Jan 29 12:06:01 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 29 12:06:01 crc restorecon[4583]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 12:06:01 crc restorecon[4583]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 12:06:01 crc restorecon[4583]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:01 crc 
restorecon[4583]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:01 crc restorecon[4583]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc 
restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 
12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:06:01 crc restorecon[4583]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc 
restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:06:01 crc restorecon[4583]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:06:01 crc restorecon[4583]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:06:01 crc restorecon[4583]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:06:01 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:06:02 crc 
restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 
crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 
12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:06:02 crc 
restorecon[4583]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc 
restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:06:02 crc restorecon[4583]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:06:02 crc restorecon[4583]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 
crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc 
restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc 
restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc 
restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc 
restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc 
restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:06:02 crc restorecon[4583]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:06:02 crc restorecon[4583]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 29 12:06:03 crc kubenswrapper[4660]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:06:03 crc kubenswrapper[4660]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 29 12:06:03 crc kubenswrapper[4660]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:06:03 crc kubenswrapper[4660]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 12:06:03 crc kubenswrapper[4660]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:06:03 crc kubenswrapper[4660]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.221410 4660 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.229870 4660 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.229902 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.229912 4660 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.229921 4660 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.229929 4660 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.229937 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.229946 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.229955 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.229964 4660 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.229972 4660 
feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.229982 4660 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.229992 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230001 4660 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230009 4660 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230017 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230025 4660 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230035 4660 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230046 4660 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230054 4660 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230063 4660 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230071 4660 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230080 4660 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230089 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230097 4660 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230106 4660 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230114 4660 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230122 4660 feature_gate.go:330] unrecognized feature gate: Example Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230130 4660 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230139 4660 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230151 4660 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230160 4660 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230169 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230177 4660 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230185 4660 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230193 4660 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230200 4660 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230208 4660 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230217 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230224 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230232 4660 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230242 4660 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230252 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230260 4660 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230269 4660 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230278 4660 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230286 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230294 4660 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230302 4660 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230310 4660 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230317 4660 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230325 4660 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230332 4660 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230340 4660 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230348 4660 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230356 4660 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230364 4660 feature_gate.go:330] 
unrecognized feature gate: HardwareSpeed Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230372 4660 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230379 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230388 4660 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230395 4660 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230403 4660 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230410 4660 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230418 4660 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230426 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230434 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230441 4660 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230449 4660 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230460 4660 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230469 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230477 4660 feature_gate.go:330] unrecognized feature gate: 
VSphereControlPlaneMachineSet Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.230484 4660 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.231966 4660 flags.go:64] FLAG: --address="0.0.0.0" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232229 4660 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232264 4660 flags.go:64] FLAG: --anonymous-auth="true" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232284 4660 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232295 4660 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232303 4660 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232318 4660 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232327 4660 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232333 4660 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232338 4660 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232344 4660 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232350 4660 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232355 4660 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232361 4660 flags.go:64] FLAG: --cgroup-root="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232365 4660 flags.go:64] FLAG: 
--cgroups-per-qos="true" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232371 4660 flags.go:64] FLAG: --client-ca-file="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232379 4660 flags.go:64] FLAG: --cloud-config="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232391 4660 flags.go:64] FLAG: --cloud-provider="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232400 4660 flags.go:64] FLAG: --cluster-dns="[]" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232409 4660 flags.go:64] FLAG: --cluster-domain="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232415 4660 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232424 4660 flags.go:64] FLAG: --config-dir="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232429 4660 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232436 4660 flags.go:64] FLAG: --container-log-max-files="5" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232445 4660 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232450 4660 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232456 4660 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232462 4660 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232469 4660 flags.go:64] FLAG: --contention-profiling="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232476 4660 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232482 4660 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232488 4660 flags.go:64] FLAG: 
--cpu-manager-policy="none" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232495 4660 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232505 4660 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232511 4660 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232517 4660 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232524 4660 flags.go:64] FLAG: --enable-load-reader="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232530 4660 flags.go:64] FLAG: --enable-server="true" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.232536 4660 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233296 4660 flags.go:64] FLAG: --event-burst="100" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233302 4660 flags.go:64] FLAG: --event-qps="50" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233307 4660 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233313 4660 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233320 4660 flags.go:64] FLAG: --eviction-hard="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233335 4660 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233340 4660 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233345 4660 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233351 4660 flags.go:64] FLAG: --eviction-soft="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233356 4660 flags.go:64] 
FLAG: --eviction-soft-grace-period="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233360 4660 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233366 4660 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233371 4660 flags.go:64] FLAG: --experimental-mounter-path="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233377 4660 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233382 4660 flags.go:64] FLAG: --fail-swap-on="true" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233386 4660 flags.go:64] FLAG: --feature-gates="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233393 4660 flags.go:64] FLAG: --file-check-frequency="20s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233398 4660 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233405 4660 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233414 4660 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233420 4660 flags.go:64] FLAG: --healthz-port="10248" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233425 4660 flags.go:64] FLAG: --help="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233431 4660 flags.go:64] FLAG: --hostname-override="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233435 4660 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233441 4660 flags.go:64] FLAG: --http-check-frequency="20s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233447 4660 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233453 4660 flags.go:64] FLAG: 
--image-credential-provider-config="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233459 4660 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233465 4660 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233470 4660 flags.go:64] FLAG: --image-service-endpoint="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233477 4660 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233482 4660 flags.go:64] FLAG: --kube-api-burst="100" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233489 4660 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233495 4660 flags.go:64] FLAG: --kube-api-qps="50" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233502 4660 flags.go:64] FLAG: --kube-reserved="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233506 4660 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233511 4660 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233517 4660 flags.go:64] FLAG: --kubelet-cgroups="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233522 4660 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233527 4660 flags.go:64] FLAG: --lock-file="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233531 4660 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233536 4660 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233542 4660 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233552 4660 flags.go:64] 
FLAG: --log-json-split-stream="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233557 4660 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233562 4660 flags.go:64] FLAG: --log-text-split-stream="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233567 4660 flags.go:64] FLAG: --logging-format="text" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233572 4660 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233577 4660 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233581 4660 flags.go:64] FLAG: --manifest-url="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233586 4660 flags.go:64] FLAG: --manifest-url-header="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233595 4660 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233600 4660 flags.go:64] FLAG: --max-open-files="1000000" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233608 4660 flags.go:64] FLAG: --max-pods="110" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233613 4660 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233618 4660 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233623 4660 flags.go:64] FLAG: --memory-manager-policy="None" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233628 4660 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233633 4660 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233638 4660 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 
12:06:03.233643 4660 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233667 4660 flags.go:64] FLAG: --node-status-max-images="50" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233677 4660 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233683 4660 flags.go:64] FLAG: --oom-score-adj="-999" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233709 4660 flags.go:64] FLAG: --pod-cidr="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233714 4660 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233724 4660 flags.go:64] FLAG: --pod-manifest-path="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233728 4660 flags.go:64] FLAG: --pod-max-pids="-1" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233734 4660 flags.go:64] FLAG: --pods-per-core="0" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233739 4660 flags.go:64] FLAG: --port="10250" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233744 4660 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233749 4660 flags.go:64] FLAG: --provider-id="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233754 4660 flags.go:64] FLAG: --qos-reserved="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233758 4660 flags.go:64] FLAG: --read-only-port="10255" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233763 4660 flags.go:64] FLAG: --register-node="true" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233767 4660 flags.go:64] FLAG: --register-schedulable="true" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 
12:06:03.233772 4660 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233783 4660 flags.go:64] FLAG: --registry-burst="10" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233788 4660 flags.go:64] FLAG: --registry-qps="5" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233793 4660 flags.go:64] FLAG: --reserved-cpus="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233798 4660 flags.go:64] FLAG: --reserved-memory="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233804 4660 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233809 4660 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233814 4660 flags.go:64] FLAG: --rotate-certificates="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233819 4660 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233823 4660 flags.go:64] FLAG: --runonce="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233827 4660 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233832 4660 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233836 4660 flags.go:64] FLAG: --seccomp-default="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233841 4660 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233847 4660 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233855 4660 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233861 4660 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 29 12:06:03 crc 
kubenswrapper[4660]: I0129 12:06:03.233865 4660 flags.go:64] FLAG: --storage-driver-password="root" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233870 4660 flags.go:64] FLAG: --storage-driver-secure="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233876 4660 flags.go:64] FLAG: --storage-driver-table="stats" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233881 4660 flags.go:64] FLAG: --storage-driver-user="root" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233886 4660 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233891 4660 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233896 4660 flags.go:64] FLAG: --system-cgroups="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233900 4660 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233909 4660 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233914 4660 flags.go:64] FLAG: --tls-cert-file="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233919 4660 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233925 4660 flags.go:64] FLAG: --tls-min-version="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233930 4660 flags.go:64] FLAG: --tls-private-key-file="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233934 4660 flags.go:64] FLAG: --topology-manager-policy="none" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233939 4660 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233943 4660 flags.go:64] FLAG: --topology-manager-scope="container" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233948 4660 flags.go:64] FLAG: --v="2" Jan 29 12:06:03 crc 
kubenswrapper[4660]: I0129 12:06:03.233957 4660 flags.go:64] FLAG: --version="false" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233964 4660 flags.go:64] FLAG: --vmodule="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233971 4660 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.233976 4660 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234208 4660 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234217 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234223 4660 feature_gate.go:330] unrecognized feature gate: Example Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234228 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234232 4660 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234237 4660 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234244 4660 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234250 4660 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234256 4660 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234260 4660 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234284 4660 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 
12:06:03.234292 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234299 4660 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234308 4660 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234316 4660 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234324 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234329 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234334 4660 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234340 4660 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234346 4660 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234352 4660 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234358 4660 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234363 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234368 4660 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234373 4660 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234377 4660 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234382 4660 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234388 4660 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234392 4660 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234397 4660 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234402 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234407 4660 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234412 4660 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234416 4660 feature_gate.go:330] 
unrecognized feature gate: PersistentIPsForVirtualization Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234422 4660 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234428 4660 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234434 4660 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234439 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234443 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234449 4660 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234454 4660 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234458 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234463 4660 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234470 4660 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234478 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234484 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234489 4660 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234494 4660 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234500 4660 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234505 4660 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234509 4660 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234514 4660 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234519 4660 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234523 4660 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234528 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234532 4660 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234536 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234540 4660 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234545 4660 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234549 4660 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234553 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234557 4660 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234562 4660 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234567 4660 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234572 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234577 4660 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234581 4660 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234586 4660 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234590 4660 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234596 4660 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.234601 4660 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.234617 4660 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.244120 4660 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.244220 4660 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.244338 4660 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.244404 4660 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.244451 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.244497 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.244541 4660 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.244594 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.244639 4660 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.244682 4660 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.244765 4660 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.244815 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.244858 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.244900 4660 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.244950 4660 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.244995 4660 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245042 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245086 4660 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245131 4660 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245179 4660 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245223 4660 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245270 4660 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245314 4660 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245357 4660 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245405 4660 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245482 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245527 4660 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245574 4660 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245617 4660 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245659 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245726 4660 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245778 4660 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245830 4660 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245879 4660 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245926 4660 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.245973 4660 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246020 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246064 4660 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246107 4660 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246154 4660 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246198 4660 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246245 4660 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246290 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246331 4660 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246372 4660 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246414 4660 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246455 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246503 4660 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246554 4660 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246663 4660 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246741 4660 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246793 4660 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246837 4660 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246885 4660 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246929 4660 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.246971 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247014 4660 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247063 4660 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247107 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247148 4660 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247195 4660 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247239 4660 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247285 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247328 4660 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247370 4660 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247412 4660 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247453 4660 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247501 4660 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247551 4660 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247595 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247638 4660 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247682 4660 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.247773 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.247825 4660 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.248083 4660 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.248139 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.248184 4660 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.248414 4660 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.248457 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.248500 4660 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.248551 4660 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.248606 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.248654 4660 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.248719 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.248778 4660 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.248822 4660 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.248864 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.248914 4660 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.248963 4660 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249007 4660 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249049 4660 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249090 4660 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249132 4660 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249174 4660 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249225 4660 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249424 4660 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249468 4660 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249510 4660 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249551 4660 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249598 4660 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249641 4660 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249701 4660 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249756 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249801 4660 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249843 4660 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249885 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249927 4660 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.249969 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250024 4660 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250072 4660 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250115 4660 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250159 4660 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250203 4660 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250246 4660 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250296 4660 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250344 4660 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250389 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250432 4660 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250474 4660 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250515 4660 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250557 4660 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250600 4660 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250648 4660 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250712 4660 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250765 4660 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250809 4660 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250852 4660 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250894 4660 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250937 4660 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.250993 4660 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.251039 4660 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.251082 4660 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.251124 4660 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.251166 4660 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.251208 4660 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.251256 4660 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.251303 4660 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.251348 4660 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.251390 4660 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.251433 4660 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.251474 4660 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.251524 4660 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.251572 4660 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.251616 4660 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.251673 4660 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.251745 4660 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.252687 4660 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.256728 4660 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.256887 4660 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.258372 4660 server.go:997] "Starting client certificate rotation"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.258461 4660 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.260399 4660 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-15 17:41:48.971008416 +0000 UTC
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.260560 4660 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.287517 4660 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 29 12:06:03 crc kubenswrapper[4660]: E0129 12:06:03.290360 4660 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.146:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.292034 4660 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.316066 4660 log.go:25] "Validated CRI v1 runtime API"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.349552 4660 log.go:25] "Validated CRI v1 image API"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.351012 4660 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.356205 4660 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-11-59-51-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.356251 4660 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.368638 4660 manager.go:217] Machine: {Timestamp:2026-01-29 12:06:03.36510403 +0000 UTC m=+0.588046182 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:84b5dbbc-d752-4d09-af33-1495e13b6eab BootID:c98432e5-da5f-42fb-aa9a-8b9962bbfbea Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:18:f3:66 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:18:f3:66 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:fc:df:a8 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b5:e5:69 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b4:4d:e2 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a1:be:fc Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d6:14:a6:f2:27:f4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:16:44:8f:81:84:68 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.368814 4660 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.368908 4660 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.371023 4660 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.371176 4660 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.371210 4660 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.371386 4660 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.371396 4660 container_manager_linux.go:303] "Creating device plugin manager"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.372243 4660 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.373087 4660 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.373231 4660 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.373299 4660 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.377481 4660 kubelet.go:418] "Attempting to sync node with API server"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.377501 4660 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.377516 4660 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.377526 4660 kubelet.go:324] "Adding apiserver pod source"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.377537 4660 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.381882 4660 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.382746 4660 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.384114 4660 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.384461 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.146:6443: connect: connection refused
Jan 29 12:06:03 crc kubenswrapper[4660]: E0129 12:06:03.384521 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.146:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.384769 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.146:6443: connect: connection refused
Jan 29 12:06:03 crc kubenswrapper[4660]: E0129 12:06:03.384875 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.146:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:06:03 
crc kubenswrapper[4660]: I0129 12:06:03.386362 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.386383 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.386390 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.386396 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.386406 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.386413 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.386419 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.386431 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.386439 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.386447 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.386473 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.386480 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.389683 4660 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.390016 4660 server.go:1280] "Started kubelet" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 
12:06:03.390968 4660 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.390995 4660 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.391756 4660 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:06:03 crc systemd[1]: Started Kubernetes Kubelet. Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.396324 4660 server.go:460] "Adding debug handlers to kubelet server" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.397441 4660 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.146:6443: connect: connection refused Jan 29 12:06:03 crc kubenswrapper[4660]: E0129 12:06:03.397097 4660 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.146:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f3230b700c1c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 12:06:03.389993416 +0000 UTC m=+0.612935538,LastTimestamp:2026-01-29 12:06:03.389993416 +0000 UTC m=+0.612935538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.399432 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 
12:06:03.400502 4660 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.400612 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 00:26:31.786648341 +0000 UTC Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.400779 4660 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.400804 4660 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.400955 4660 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 12:06:03 crc kubenswrapper[4660]: E0129 12:06:03.401046 4660 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.402737 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.146:6443: connect: connection refused Jan 29 12:06:03 crc kubenswrapper[4660]: E0129 12:06:03.403065 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:06:03 crc kubenswrapper[4660]: E0129 12:06:03.405334 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" interval="200ms" Jan 29 12:06:03 crc kubenswrapper[4660]: 
I0129 12:06:03.405458 4660 factory.go:153] Registering CRI-O factory Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.405775 4660 factory.go:221] Registration of the crio container factory successfully Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.406033 4660 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.406186 4660 factory.go:55] Registering systemd factory Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.406319 4660 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.406499 4660 factory.go:103] Registering Raw factory Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.406727 4660 manager.go:1196] Started watching for new ooms in manager Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.408173 4660 manager.go:319] Starting recovery of all containers Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.409669 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.409748 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.409765 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.409779 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.409793 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.409805 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.409819 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.409832 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.409846 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.415742 4660 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.415814 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.415837 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.415853 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.415873 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.415893 4660 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.415908 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.415924 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.415940 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.415984 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.415997 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416010 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416024 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416037 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416050 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416075 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416091 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416104 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" 
seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416119 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416156 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416172 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416196 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416211 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416225 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416239 
4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416254 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416269 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416285 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416299 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416314 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416329 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416344 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416359 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416374 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416388 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416402 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416416 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416430 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416446 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416461 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416476 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416489 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416503 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" 
seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416517 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416538 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416553 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416571 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416587 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416603 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 29 12:06:03 crc 
kubenswrapper[4660]: I0129 12:06:03.416617 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416632 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416646 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416663 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416678 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416711 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416726 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416740 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416753 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416766 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416783 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416797 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416811 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416825 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416839 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416854 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416867 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416879 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416892 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416905 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416919 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416932 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416946 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416959 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416972 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416985 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.416998 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417011 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417025 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417039 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417053 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417067 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417080 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417093 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417107 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417123 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417137 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417153 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417166 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417179 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417192 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417204 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417221 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417236 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417249 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417264 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417279 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417306 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417322 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417336 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417350 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417364 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417377 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417391 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417405 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417418 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417433 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417447 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417463 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417477 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417490 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417503 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417516 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417532 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417546 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417559 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417573 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417588 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417602 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417616 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417629 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417643 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417656 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417668 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417684 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417717 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417729 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417741 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417753 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417769 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417782 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417796 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417811 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417824 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417836 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417848 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417859 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417870 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417882 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417894 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417907 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417918 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417930 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417943 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417955 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417967 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417979 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.417989 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418001 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418015 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418027 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418041 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418055 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418068 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418081 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418092 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418107 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418120 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418131 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418143 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418154 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418166 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418177 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418189 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418200 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418211 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418222 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418233 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418244 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418255 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418265 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418278 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418295 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418306 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418317 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418327 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418338 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418350 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418362 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136"
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418373 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418383 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418397 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418409 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418419 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418431 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" 
seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418442 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418465 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418477 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418502 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418515 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418527 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 
12:06:03.418539 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418552 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418567 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418580 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418592 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418604 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418617 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418629 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418641 4660 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418653 4660 reconstruct.go:97] "Volume reconstruction finished" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.418662 4660 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.435339 4660 manager.go:324] Recovery completed Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.450546 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.452362 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.452393 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.452405 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.456351 4660 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 
12:06:03.456365 4660 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.456387 4660 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.466440 4660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.468494 4660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.468532 4660 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.468558 4660 kubelet.go:2335] "Starting kubelet main sync loop" Jan 29 12:06:03 crc kubenswrapper[4660]: E0129 12:06:03.468764 4660 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.470813 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.146:6443: connect: connection refused Jan 29 12:06:03 crc kubenswrapper[4660]: E0129 12:06:03.470873 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.472868 4660 policy_none.go:49] "None policy: Start" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.473659 4660 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 
12:06:03.473683 4660 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:06:03 crc kubenswrapper[4660]: E0129 12:06:03.501835 4660 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.527600 4660 manager.go:334] "Starting Device Plugin manager" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.527785 4660 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.527802 4660 server.go:79] "Starting device plugin registration server" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.528272 4660 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.528290 4660 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.528832 4660 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.528918 4660 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.528930 4660 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:06:03 crc kubenswrapper[4660]: E0129 12:06:03.538369 4660 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.569988 4660 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 
29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.570059 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.571133 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.571163 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.571177 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.571269 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.571491 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.571520 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.571778 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.571805 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.571832 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.572015 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.572130 4660 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.572148 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.572181 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.572093 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.572224 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.572682 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.572726 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.572747 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.572822 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.572927 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.572956 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.573279 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.573297 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.573304 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.573394 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.573423 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.573438 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.573447 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.573521 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.573546 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.573864 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.573885 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.573895 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.573932 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.573947 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.573956 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.574025 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.574049 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.574118 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.574133 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.574142 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.575866 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.575967 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.575980 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:03 crc kubenswrapper[4660]: E0129 12:06:03.607128 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" interval="400ms" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.622482 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc 
kubenswrapper[4660]: I0129 12:06:03.622537 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.622561 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.622587 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.622609 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.622656 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.622848 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.622882 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.622904 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.622929 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.622951 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.622973 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.622988 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.623021 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.623182 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.628444 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.629988 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.630048 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:03 crc kubenswrapper[4660]: 
I0129 12:06:03.630061 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.630099 4660 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 12:06:03 crc kubenswrapper[4660]: E0129 12:06:03.630627 4660 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.146:6443: connect: connection refused" node="crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725046 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725106 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725132 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725157 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725184 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725203 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725225 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725246 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725270 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 
12:06:03.725292 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725317 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725337 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725353 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725407 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725358 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725280 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725481 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725516 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725504 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725482 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725574 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725583 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725356 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725606 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725630 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725639 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725441 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725536 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725676 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.725768 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.831249 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.833138 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.833196 4660 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.833211 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.833235 4660 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 12:06:03 crc kubenswrapper[4660]: E0129 12:06:03.833794 4660 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.146:6443: connect: connection refused" node="crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.900220 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.920932 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.929533 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.945564 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.950452 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a581ca539ca9f183863a394359aa5997387cf81d5ea038691fd2b71c82ecdafe WatchSource:0}: Error finding container a581ca539ca9f183863a394359aa5997387cf81d5ea038691fd2b71c82ecdafe: Status 404 returned error can't find the container with id a581ca539ca9f183863a394359aa5997387cf81d5ea038691fd2b71c82ecdafe Jan 29 12:06:03 crc kubenswrapper[4660]: I0129 12:06:03.950618 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.957165 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-fbf48568e676b516476c634ded54a30d18b538494b02b57d4cb55f071533d676 WatchSource:0}: Error finding container fbf48568e676b516476c634ded54a30d18b538494b02b57d4cb55f071533d676: Status 404 returned error can't find the container with id fbf48568e676b516476c634ded54a30d18b538494b02b57d4cb55f071533d676 Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.960877 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-193e13de7c6cbdd0b6fbe44c1f2cb6d27544fc535517ad6b22138118b2ca7ef2 WatchSource:0}: Error finding container 193e13de7c6cbdd0b6fbe44c1f2cb6d27544fc535517ad6b22138118b2ca7ef2: Status 404 returned error can't find the container with id 193e13de7c6cbdd0b6fbe44c1f2cb6d27544fc535517ad6b22138118b2ca7ef2 Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.961835 4660 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a702c6596b419f3148114365093914280d144ad630a2563e23d9d550455a8fea WatchSource:0}: Error finding container a702c6596b419f3148114365093914280d144ad630a2563e23d9d550455a8fea: Status 404 returned error can't find the container with id a702c6596b419f3148114365093914280d144ad630a2563e23d9d550455a8fea Jan 29 12:06:03 crc kubenswrapper[4660]: W0129 12:06:03.970942 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-da245dd21e3b357e22ddd4d6ab7bce757a1507025443967a807beac364387cd2 WatchSource:0}: Error finding container da245dd21e3b357e22ddd4d6ab7bce757a1507025443967a807beac364387cd2: Status 404 returned error can't find the container with id da245dd21e3b357e22ddd4d6ab7bce757a1507025443967a807beac364387cd2 Jan 29 12:06:04 crc kubenswrapper[4660]: E0129 12:06:04.008393 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" interval="800ms" Jan 29 12:06:04 crc kubenswrapper[4660]: I0129 12:06:04.234324 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:04 crc kubenswrapper[4660]: I0129 12:06:04.235359 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:04 crc kubenswrapper[4660]: I0129 12:06:04.235399 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:04 crc kubenswrapper[4660]: I0129 12:06:04.235410 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:04 crc 
kubenswrapper[4660]: I0129 12:06:04.235433 4660 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 12:06:04 crc kubenswrapper[4660]: E0129 12:06:04.235780 4660 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.146:6443: connect: connection refused" node="crc" Jan 29 12:06:04 crc kubenswrapper[4660]: I0129 12:06:04.398996 4660 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.146:6443: connect: connection refused Jan 29 12:06:04 crc kubenswrapper[4660]: I0129 12:06:04.401151 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:09:50.825709589 +0000 UTC Jan 29 12:06:04 crc kubenswrapper[4660]: I0129 12:06:04.473239 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"193e13de7c6cbdd0b6fbe44c1f2cb6d27544fc535517ad6b22138118b2ca7ef2"} Jan 29 12:06:04 crc kubenswrapper[4660]: I0129 12:06:04.473794 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fbf48568e676b516476c634ded54a30d18b538494b02b57d4cb55f071533d676"} Jan 29 12:06:04 crc kubenswrapper[4660]: I0129 12:06:04.475420 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a581ca539ca9f183863a394359aa5997387cf81d5ea038691fd2b71c82ecdafe"} Jan 29 12:06:04 crc kubenswrapper[4660]: I0129 12:06:04.477038 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"da245dd21e3b357e22ddd4d6ab7bce757a1507025443967a807beac364387cd2"} Jan 29 12:06:04 crc kubenswrapper[4660]: I0129 12:06:04.478784 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a702c6596b419f3148114365093914280d144ad630a2563e23d9d550455a8fea"} Jan 29 12:06:04 crc kubenswrapper[4660]: W0129 12:06:04.794665 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.146:6443: connect: connection refused Jan 29 12:06:04 crc kubenswrapper[4660]: E0129 12:06:04.795075 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:06:04 crc kubenswrapper[4660]: E0129 12:06:04.809845 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" interval="1.6s" Jan 29 12:06:04 crc kubenswrapper[4660]: W0129 12:06:04.825579 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.146:6443: connect: connection refused Jan 29 12:06:04 crc kubenswrapper[4660]: E0129 12:06:04.825726 4660 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:06:04 crc kubenswrapper[4660]: W0129 12:06:04.914027 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.146:6443: connect: connection refused Jan 29 12:06:04 crc kubenswrapper[4660]: E0129 12:06:04.914115 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:06:04 crc kubenswrapper[4660]: W0129 12:06:04.969732 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.146:6443: connect: connection refused Jan 29 12:06:04 crc kubenswrapper[4660]: E0129 12:06:04.969830 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.036459 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:05 crc 
kubenswrapper[4660]: I0129 12:06:05.037789 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.037861 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.037885 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.037924 4660 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 12:06:05 crc kubenswrapper[4660]: E0129 12:06:05.038614 4660 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.146:6443: connect: connection refused" node="crc" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.398427 4660 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.146:6443: connect: connection refused Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.401558 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:51:38.282116991 +0000 UTC Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.483404 4660 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed" exitCode=0 Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.483526 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed"} Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.483586 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.484784 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.484839 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.484862 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.486230 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.486308 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf"} Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.486355 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9"} Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.486381 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2"} Jan 29 
12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.486406 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea"} Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.486932 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.486967 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.486987 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.487965 4660 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.488241 4660 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb" exitCode=0 Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.488637 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.489037 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb"} Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.489415 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.489432 4660 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.489441 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:05 crc kubenswrapper[4660]: E0129 12:06:05.489518 4660 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.146:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.491102 4660 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a" exitCode=0 Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.491179 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a"} Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.491278 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.492558 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.492590 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.492602 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 
12:06:05.498544 4660 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c2a1ad41b01dad8a365ccaae82460d697d2bd561cc9c6df637859f867e74f556" exitCode=0
Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.498590 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c2a1ad41b01dad8a365ccaae82460d697d2bd561cc9c6df637859f867e74f556"}
Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.498624 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.499764 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.499808 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.499825 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.502030 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.503646 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.503672 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:05 crc kubenswrapper[4660]: I0129 12:06:05.503683 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.296766 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.304484 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.398320 4660 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.146:6443: connect: connection refused
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.402549 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:24:01.954276036 +0000 UTC
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.402652 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:06:06 crc kubenswrapper[4660]: E0129 12:06:06.411291 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" interval="3.2s"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.502639 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"94f741a954cd77a898b743569dacdea3c0a9f16f16151b916fd2c5a6217729e1"}
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.502714 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.502723 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fef3573008a2c750cbb5bcc0762d2d2bb801f7a537212eafcb1519942e2440b1"}
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.502739 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a0cd5ea2f1041217d885552fe898a7b87a706c7588f60fd398e7c73ba73cfb9d"}
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.503390 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.503419 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.503429 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.504580 4660 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649" exitCode=0
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.504616 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649"}
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.504670 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.505548 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.505575 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.505585 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.508344 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100"}
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.508379 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf"}
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.508392 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3"}
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.508402 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7"}
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.509800 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.509810 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"675bac3555377a9cc0cafa3e0bdb76d7a025de4cda4a7aac3b160a66d16f8343"}
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.509843 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.509850 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.510900 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.510935 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.510944 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.510933 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.510976 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.510987 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.641143 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.642332 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.642372 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.642385 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:06 crc kubenswrapper[4660]: I0129 12:06:06.642410 4660 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 12:06:06 crc kubenswrapper[4660]: E0129 12:06:06.642840 4660 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.146:6443: connect: connection refused" node="crc"
Jan 29 12:06:06 crc kubenswrapper[4660]: W0129 12:06:06.696037 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.146:6443: connect: connection refused
Jan 29 12:06:06 crc kubenswrapper[4660]: E0129 12:06:06.696131 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.146:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:06:07 crc kubenswrapper[4660]: W0129 12:06:07.200152 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.146:6443: connect: connection refused
Jan 29 12:06:07 crc kubenswrapper[4660]: W0129 12:06:07.200162 4660 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.146:6443: connect: connection refused
Jan 29 12:06:07 crc kubenswrapper[4660]: E0129 12:06:07.200229 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.146:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:06:07 crc kubenswrapper[4660]: E0129 12:06:07.200267 4660 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.146:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.403609 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:04:29.812442231 +0000 UTC
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.516247 4660 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659" exitCode=0
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.516359 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.516482 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659"}
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.517161 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.517234 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.517253 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.521051 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8"}
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.521180 4660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.521217 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.521187 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.521260 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.521261 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.524422 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.524471 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.524494 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.526183 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.526224 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.526246 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.527244 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.527298 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.527326 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.527295 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.527459 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:07 crc kubenswrapper[4660]: I0129 12:06:07.527492 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:08 crc kubenswrapper[4660]: I0129 12:06:08.404649 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:05:27.992099399 +0000 UTC
Jan 29 12:06:08 crc kubenswrapper[4660]: I0129 12:06:08.527407 4660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 12:06:08 crc kubenswrapper[4660]: I0129 12:06:08.527423 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba"}
Jan 29 12:06:08 crc kubenswrapper[4660]: I0129 12:06:08.527492 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c"}
Jan 29 12:06:08 crc kubenswrapper[4660]: I0129 12:06:08.527507 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc"}
Jan 29 12:06:08 crc kubenswrapper[4660]: I0129 12:06:08.527517 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b"}
Jan 29 12:06:08 crc kubenswrapper[4660]: I0129 12:06:08.527457 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:08 crc kubenswrapper[4660]: I0129 12:06:08.528810 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:08 crc kubenswrapper[4660]: I0129 12:06:08.528846 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:08 crc kubenswrapper[4660]: I0129 12:06:08.528858 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.405724 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 22:37:39.090820324 +0000 UTC
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.533842 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b"}
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.534079 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.535153 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.535208 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.535222 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.675971 4660 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.743242 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.743521 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.744938 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.744969 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.744980 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.843986 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.845747 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.845812 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.845830 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:09 crc kubenswrapper[4660]: I0129 12:06:09.845869 4660 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 12:06:10 crc kubenswrapper[4660]: I0129 12:06:10.406097 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 05:12:42.59371733 +0000 UTC
Jan 29 12:06:10 crc kubenswrapper[4660]: I0129 12:06:10.535920 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:10 crc kubenswrapper[4660]: I0129 12:06:10.536920 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:10 crc kubenswrapper[4660]: I0129 12:06:10.536968 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:10 crc kubenswrapper[4660]: I0129 12:06:10.536978 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:10 crc kubenswrapper[4660]: I0129 12:06:10.627120 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 12:06:10 crc kubenswrapper[4660]: I0129 12:06:10.627422 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:10 crc kubenswrapper[4660]: I0129 12:06:10.629081 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:10 crc kubenswrapper[4660]: I0129 12:06:10.629126 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:10 crc kubenswrapper[4660]: I0129 12:06:10.629135 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:11 crc kubenswrapper[4660]: I0129 12:06:11.406822 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:50:30.174017259 +0000 UTC
Jan 29 12:06:11 crc kubenswrapper[4660]: I0129 12:06:11.723511 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:06:11 crc kubenswrapper[4660]: I0129 12:06:11.723779 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:11 crc kubenswrapper[4660]: I0129 12:06:11.725278 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:11 crc kubenswrapper[4660]: I0129 12:06:11.725341 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:11 crc kubenswrapper[4660]: I0129 12:06:11.725356 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:12 crc kubenswrapper[4660]: I0129 12:06:12.086155 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:06:12 crc kubenswrapper[4660]: I0129 12:06:12.407746 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:16:43.418732549 +0000 UTC
Jan 29 12:06:12 crc kubenswrapper[4660]: I0129 12:06:12.540604 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:12 crc kubenswrapper[4660]: I0129 12:06:12.541574 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:12 crc kubenswrapper[4660]: I0129 12:06:12.541622 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:12 crc kubenswrapper[4660]: I0129 12:06:12.541636 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:13 crc kubenswrapper[4660]: I0129 12:06:13.408383 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:22:09.232949301 +0000 UTC
Jan 29 12:06:13 crc kubenswrapper[4660]: E0129 12:06:13.538634 4660 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 29 12:06:14 crc kubenswrapper[4660]: I0129 12:06:14.108642 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 29 12:06:14 crc kubenswrapper[4660]: I0129 12:06:14.108850 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:14 crc kubenswrapper[4660]: I0129 12:06:14.110062 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:14 crc kubenswrapper[4660]: I0129 12:06:14.110122 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:14 crc kubenswrapper[4660]: I0129 12:06:14.110136 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:14 crc kubenswrapper[4660]: I0129 12:06:14.409333 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:36:48.897633377 +0000 UTC
Jan 29 12:06:14 crc kubenswrapper[4660]: I0129 12:06:14.584866 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:06:14 crc kubenswrapper[4660]: I0129 12:06:14.585148 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:14 crc kubenswrapper[4660]: I0129 12:06:14.586631 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:14 crc kubenswrapper[4660]: I0129 12:06:14.586716 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:14 crc kubenswrapper[4660]: I0129 12:06:14.586774 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:14 crc kubenswrapper[4660]: I0129 12:06:14.591977 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:06:15 crc kubenswrapper[4660]: I0129 12:06:15.409733 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 18:59:17.827205995 +0000 UTC
Jan 29 12:06:15 crc kubenswrapper[4660]: I0129 12:06:15.449986 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 29 12:06:15 crc kubenswrapper[4660]: I0129 12:06:15.450165 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:15 crc kubenswrapper[4660]: I0129 12:06:15.451272 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:15 crc kubenswrapper[4660]: I0129 12:06:15.451297 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:15 crc kubenswrapper[4660]: I0129 12:06:15.451306 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:15 crc kubenswrapper[4660]: I0129 12:06:15.547799 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:15 crc kubenswrapper[4660]: I0129 12:06:15.548812 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:15 crc kubenswrapper[4660]: I0129 12:06:15.548937 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:15 crc kubenswrapper[4660]: I0129 12:06:15.548953 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:16 crc kubenswrapper[4660]: I0129 12:06:16.410011 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:01:29.301925472 +0000 UTC
Jan 29 12:06:17 crc kubenswrapper[4660]: I0129 12:06:17.196804 4660 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 29 12:06:17 crc kubenswrapper[4660]: I0129 12:06:17.196884 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 29 12:06:17 crc kubenswrapper[4660]: I0129 12:06:17.213680 4660 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 29 12:06:17 crc kubenswrapper[4660]: I0129 12:06:17.213777 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 29 12:06:17 crc kubenswrapper[4660]: I0129 12:06:17.410510 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:21:50.271433906 +0000 UTC
Jan 29 12:06:17 crc kubenswrapper[4660]: I0129 12:06:17.585574 4660 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 29 12:06:17 crc kubenswrapper[4660]: I0129 12:06:17.585663 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 29 12:06:18 crc kubenswrapper[4660]: I0129 12:06:18.410979 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:42:27.047139144 +0000 UTC
Jan 29 12:06:19 crc kubenswrapper[4660]: I0129 12:06:19.411624 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:34:04.991211222 +0000 UTC
Jan 29 12:06:20 crc kubenswrapper[4660]: I0129 12:06:20.411955 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:45:06.04241114 +0000 UTC
Jan 29 12:06:21 crc kubenswrapper[4660]: I0129 12:06:21.412973 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:34:18.725431084 +0000 UTC
Jan 29 12:06:21 crc kubenswrapper[4660]: I0129 12:06:21.727532 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:06:21 crc kubenswrapper[4660]: I0129 12:06:21.727731 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:06:21 crc kubenswrapper[4660]: I0129 12:06:21.728821 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:21 crc kubenswrapper[4660]: I0129 12:06:21.728863 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:21 crc kubenswrapper[4660]: I0129 12:06:21.728875 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:21 crc kubenswrapper[4660]: I0129 12:06:21.732169 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.146843 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.149485 4660 trace.go:236] Trace[1731440544]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 12:06:07.539) (total time: 14609ms):
Jan 29 12:06:22 crc kubenswrapper[4660]: Trace[1731440544]: ---"Objects listed" error: 14609ms (12:06:22.149)
Jan 29 12:06:22 crc kubenswrapper[4660]: Trace[1731440544]: [14.609734477s] [14.609734477s] END
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.149571 4660 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.151553 4660 trace.go:236] Trace[719192654]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 12:06:10.957) (total time: 11193ms):
Jan 29 12:06:22 crc kubenswrapper[4660]: Trace[719192654]: ---"Objects listed" error: 11193ms (12:06:22.151)
Jan 29 12:06:22 crc kubenswrapper[4660]: Trace[719192654]: [11.193957092s] [11.193957092s] END
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.151622 4660 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.152780 4660 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.152901 4660 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.153036 4660 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.153062 4660 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.186317 4660 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.207523 4660 csr.go:261] certificate signing request csr-tknzv is approved, waiting to be issued
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.214192 4660 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35108->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.214644 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35108->192.168.126.11:17697: read: connection reset by peer"
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.214227 4660 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35098->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.217044 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35098->192.168.126.11:17697: read: connection reset by peer"
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.217699 4660 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.217766 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.221051 4660 csr.go:257] certificate signing request csr-tknzv is issued
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.388276 4660 apiserver.go:52] "Watching apiserver"
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.391360 4660 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.391649 4660 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.391992 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.392102 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.392247 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.392345 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.392481 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.392542 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.392831 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.393102 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.393161 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.396619 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.396717 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.398282 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.398293 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.398372 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.398395 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.398466 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.398755 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.405078 4660 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.406422 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 12:06:22 crc kubenswrapper[4660]: 
I0129 12:06:22.413744 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:19:06.104570169 +0000 UTC Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.425186 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-6c7n9"] Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.425598 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6c7n9" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.429664 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.429819 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.430314 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.455084 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.455142 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.455194 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.455262 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.455680 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.455293 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.455820 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.455846 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.456231 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.456257 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.456004 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.456718 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.456747 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.456158 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.456451 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457047 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.456569 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.456620 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.456647 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.456940 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457120 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457148 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457196 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457225 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:06:22 crc 
kubenswrapper[4660]: I0129 12:06:22.457269 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457296 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457324 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457369 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457398 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457736 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457786 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457815 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457862 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457888 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457932 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: 
\"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457963 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458013 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458041 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458090 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458136 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458190 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458222 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458273 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458299 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458348 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458377 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 
12:06:22.458427 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458460 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458509 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458538 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458588 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458617 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458643 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457196 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457353 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457389 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457573 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457789 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457802 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.457987 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458160 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458199 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458370 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458844 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458527 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458874 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.458676 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.459111 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.459289 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.459304 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.459389 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.459556 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.459645 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.459789 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.460049 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.460101 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.460533 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.460760 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.460797 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.460815 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.460333 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.460975 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461030 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461058 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 12:06:22 crc kubenswrapper[4660]: 
I0129 12:06:22.461175 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461212 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461260 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461286 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461332 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461359 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") 
pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461384 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461433 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461459 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461511 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461535 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461581 4660 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461604 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461627 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461670 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461722 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461743 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461763 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461856 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461877 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461893 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461991 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 12:06:22 crc kubenswrapper[4660]: 
I0129 12:06:22.462011 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462029 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462065 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462083 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462106 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462154 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462179 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462627 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462658 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462723 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462749 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 12:06:22 crc 
kubenswrapper[4660]: I0129 12:06:22.462854 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462882 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462902 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462946 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462968 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.463006 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.463029 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.463091 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.463121 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.463144 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.463190 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 12:06:22 crc 
kubenswrapper[4660]: I0129 12:06:22.463213 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461666 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.461946 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462126 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.462534 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.463244 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.463252 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.463542 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.463763 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.463790 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.464128 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.464153 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.464266 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.464298 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.464333 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.464477 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.464549 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.464812 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.465044 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.465259 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.465402 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.466997 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.467128 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.467426 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.472324 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.472664 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.472860 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.472936 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473196 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " 
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473236 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473268 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473297 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473323 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473345 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473374 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473514 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473556 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473585 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473612 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473654 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 12:06:22 crc 
kubenswrapper[4660]: I0129 12:06:22.473681 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473730 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473754 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473776 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473800 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.473824 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.474131 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.474232 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.474269 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.474298 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.474323 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 12:06:22 crc 
kubenswrapper[4660]: I0129 12:06:22.474348 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.474374 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.474399 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.474459 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.474485 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.474512 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.474535 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475230 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475287 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475316 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475313 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475338 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475409 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475446 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475478 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475529 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475560 4660 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475721 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475759 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475790 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475858 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475904 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475931 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.475978 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476003 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476029 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476081 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 12:06:22 crc 
kubenswrapper[4660]: I0129 12:06:22.476109 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476156 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476180 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476227 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476256 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476302 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476329 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476358 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476412 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476436 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476483 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476511 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476556 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476579 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476621 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476647 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476670 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476724 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476749 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476793 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476819 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476844 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476887 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476908 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476955 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.476982 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477026 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477053 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 
12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477076 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477118 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477144 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477185 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477211 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477234 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477349 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477375 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477418 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477445 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477468 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477514 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477539 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477585 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477614 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477657 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477710 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477739 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477763 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477811 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477838 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477889 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod 
\"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477916 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477959 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.477989 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.478058 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.478090 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.478140 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.478166 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.478218 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.478251 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.478303 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.478333 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.478388 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.478435 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.478462 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.478486 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.478528 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.478555 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.479652 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.479766 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.479846 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.480219 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.480875 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.481764 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.482085 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.482178 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.482499 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.474921 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.484897 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.487126 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.488225 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.488559 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.489306 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.489926 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.489957 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.491333 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.491864 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.492244 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.492521 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.492719 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.493299 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.494095 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.494449 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.494520 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.494745 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.494790 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.494867 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.494994 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.495326 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.495349 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.495816 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.496369 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.497187 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.497633 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.497652 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.497800 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.498154 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.498331 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.498614 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.500196 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.500429 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.500826 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.501338 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.501792 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.502185 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.502600 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.503063 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.506133 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.506533 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.506561 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.506742 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.507163 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.507169 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.507244 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.507403 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.507479 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.507668 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.507772 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.507876 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.512899 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.513196 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.513639 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.513747 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.514093 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.514280 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.514834 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.515929 4660 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.516423 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.517019 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.518209 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.518771 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.518667 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.518980 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.519918 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.520280 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.520422 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.520723 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.520974 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.523130 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.523614 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.524003 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.524296 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.524612 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.524633 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.524933 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.524945 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.525179 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.525428 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.525528 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.525802 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.526093 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.526240 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.526573 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.526624 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.526850 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.526917 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.528578 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.529148 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.529941 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.530343 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.531136 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.531172 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.531329 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.531433 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.531453 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.531766 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.531774 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.532016 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.532030 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.532194 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:23.032155011 +0000 UTC m=+20.255097133 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.532394 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.532455 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.532475 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:23.0324675 +0000 UTC m=+20.255409622 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.532548 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:06:23.032542513 +0000 UTC m=+20.255484645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.532628 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.532872 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.532927 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533176 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533390 4660 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533652 4660 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533622 4660 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533718 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533729 4660 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533738 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533748 4660 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533757 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533766 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc 
kubenswrapper[4660]: I0129 12:06:22.533775 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533799 4660 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533810 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533820 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533828 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533837 4660 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533845 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533853 4660 reconciler_common.go:293] "Volume detached for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533876 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533884 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533894 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533905 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533915 4660 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533924 4660 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533934 4660 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533957 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533966 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533974 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533983 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.533992 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534001 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534010 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534032 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534040 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534049 4660 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534057 4660 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534065 4660 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534074 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534089 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" 
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534113 4660 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534123 4660 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534134 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534142 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534151 4660 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534159 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534168 4660 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534191 4660 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534201 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534209 4660 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534217 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534226 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534235 4660 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534243 4660 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534266 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534275 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534283 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534291 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534301 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534310 4660 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534318 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534343 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534352 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534360 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534368 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534377 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534384 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.534713 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.535679 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.535857 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.517733 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.536194 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.538875 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.539127 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.539948 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.542404 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.542413 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.542935 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.545457 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.549413 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.556052 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.556280 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.557561 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.557774 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.558798 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.558951 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.558975 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.558990 4660 
projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.559069 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:23.05905056 +0000 UTC m=+20.281992692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.569507 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.569850 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.571092 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.572493 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.574926 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.575380 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.575685 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.576073 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.576099 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.576110 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:22 crc kubenswrapper[4660]: E0129 12:06:22.576154 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:23.076136005 +0000 UTC m=+20.299078137 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.580218 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.582749 4660 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8" exitCode=255 Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.582822 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8"} Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.595498 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.597002 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.601985 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.610664 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.620830 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637089 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcg9w\" (UniqueName: \"kubernetes.io/projected/485a51de-d434-4747-b63f-48c7486cefb3-kube-api-access-vcg9w\") pod \"node-resolver-6c7n9\" (UID: \"485a51de-d434-4747-b63f-48c7486cefb3\") " pod="openshift-dns/node-resolver-6c7n9" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637148 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637181 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637243 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/485a51de-d434-4747-b63f-48c7486cefb3-hosts-file\") pod \"node-resolver-6c7n9\" (UID: \"485a51de-d434-4747-b63f-48c7486cefb3\") " pod="openshift-dns/node-resolver-6c7n9" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637289 4660 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637303 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637318 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637329 4660 
reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637339 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637350 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637361 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637372 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637383 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637394 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637405 4660 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637415 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637429 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637441 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637453 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637463 4660 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637473 4660 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637484 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") 
on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637494 4660 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637504 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637515 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637525 4660 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637535 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637567 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637579 4660 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637590 4660 reconciler_common.go:293] "Volume detached 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637599 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637610 4660 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637620 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637632 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637642 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637652 4660 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637663 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637673 4660 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637685 4660 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637716 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637727 4660 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637766 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637779 4660 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637791 4660 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637806 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637816 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637827 4660 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637838 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637406 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637867 4660 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 
12:06:22.637889 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637902 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637916 4660 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637927 4660 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637940 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637950 4660 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637962 4660 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637972 4660 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637984 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.637995 4660 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638007 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638019 4660 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638030 4660 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638042 4660 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638052 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" 
DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638061 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638070 4660 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638078 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638089 4660 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638098 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638138 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638156 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 
12:06:22.638167 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638178 4660 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638191 4660 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638203 4660 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638215 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638226 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638236 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638247 4660 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638256 4660 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638266 4660 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638276 4660 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638287 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638298 4660 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638308 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638318 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" 
Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638329 4660 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638341 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638351 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638363 4660 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638373 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638384 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638395 4660 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638405 4660 reconciler_common.go:293] "Volume 
detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638415 4660 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638427 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638438 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638448 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638459 4660 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638469 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638480 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc 
kubenswrapper[4660]: I0129 12:06:22.638491 4660 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638502 4660 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638512 4660 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638523 4660 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638534 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638544 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638555 4660 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638565 4660 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638575 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638585 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638598 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638610 4660 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638622 4660 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638633 4660 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638640 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638643 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638672 4660 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638682 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638715 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638723 4660 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638731 4660 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638740 4660 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638748 4660 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638756 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638764 4660 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638773 4660 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638783 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638792 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638800 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638808 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638817 4660 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638827 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638834 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638842 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.638306 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.651641 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.658335 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.669147 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.684929 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.697203 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.708856 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.711458 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.719264 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.719885 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:06:22 crc kubenswrapper[4660]: W0129 12:06:22.727747 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-148786618d56eb4ed29d08633fb15d9fe739006e3213e4dfff3604b783f25a2a WatchSource:0}: Error finding container 148786618d56eb4ed29d08633fb15d9fe739006e3213e4dfff3604b783f25a2a: Status 404 returned error can't find the container with id 148786618d56eb4ed29d08633fb15d9fe739006e3213e4dfff3604b783f25a2a Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.730341 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.740201 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/485a51de-d434-4747-b63f-48c7486cefb3-hosts-file\") pod \"node-resolver-6c7n9\" (UID: \"485a51de-d434-4747-b63f-48c7486cefb3\") " pod="openshift-dns/node-resolver-6c7n9" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.740264 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcg9w\" (UniqueName: \"kubernetes.io/projected/485a51de-d434-4747-b63f-48c7486cefb3-kube-api-access-vcg9w\") pod \"node-resolver-6c7n9\" (UID: \"485a51de-d434-4747-b63f-48c7486cefb3\") " pod="openshift-dns/node-resolver-6c7n9" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.740598 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/485a51de-d434-4747-b63f-48c7486cefb3-hosts-file\") pod \"node-resolver-6c7n9\" (UID: \"485a51de-d434-4747-b63f-48c7486cefb3\") " pod="openshift-dns/node-resolver-6c7n9" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 
12:06:22.742404 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.753263 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.762414 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.762603 4660 scope.go:117] "RemoveContainer" containerID="68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8" Jan 29 12:06:22 crc kubenswrapper[4660]: I0129 12:06:22.762837 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vcg9w\" (UniqueName: \"kubernetes.io/projected/485a51de-d434-4747-b63f-48c7486cefb3-kube-api-access-vcg9w\") pod \"node-resolver-6c7n9\" (UID: \"485a51de-d434-4747-b63f-48c7486cefb3\") " pod="openshift-dns/node-resolver-6c7n9" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.039982 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6c7n9" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.041765 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.041877 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.041919 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:06:24.041895464 +0000 UTC m=+21.264837636 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.041958 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.041972 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.042037 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:24.042016568 +0000 UTC m=+21.264958760 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.042078 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.042118 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:24.042108871 +0000 UTC m=+21.265051053 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.049432 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod485a51de_d434_4747_b63f_48c7486cefb3.slice/crio-bf4b7dc08169ba6ab5282aa4730a1f5d314e3362b13e241b9f968e68adfb446e WatchSource:0}: Error finding container bf4b7dc08169ba6ab5282aa4730a1f5d314e3362b13e241b9f968e68adfb446e: Status 404 returned error can't find the container with id bf4b7dc08169ba6ab5282aa4730a1f5d314e3362b13e241b9f968e68adfb446e Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.143776 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.143843 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.143963 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.143979 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.143990 4660 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.144022 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.144059 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.144071 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.144034 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:24.144020708 +0000 UTC m=+21.366962840 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.144137 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:24.144119751 +0000 UTC m=+21.367061883 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.222346 4660 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 12:01:22 +0000 UTC, rotation deadline is 2026-12-15 21:27:23.348507224 +0000 UTC Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.222429 4660 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7689h21m0.126082486s for next certificate rotation Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.243530 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-vb4nc"] Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.243949 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.248420 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.248474 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mdfz2"] Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.249069 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.249165 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.249420 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.249712 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.249830 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.256816 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-kqctn"] Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.257785 4660 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.257938 4660 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.257966 4660 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.257987 4660 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.258015 4660 
request.go:1255] Unexpected error when reading response body: read tcp 38.102.83.146:58942->38.102.83.146:6443: use of closed network connection Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.258076 4660 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.258108 4660 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.258141 4660 request.go:1255] Unexpected error when reading response body: read tcp 38.102.83.146:58942->38.102.83.146:6443: use of closed network connection Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.258199 4660 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.258211 4660 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: unexpected error when reading response body. Please retry. 
Original error: read tcp 38.102.83.146:58942->38.102.83.146:6443: use of closed network connection Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.258229 4660 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.258253 4660 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: unexpected error when reading response body. Please retry. Original error: read tcp 38.102.83.146:58942->38.102.83.146:6443: use of closed network connection" logger="UnhandledError" Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.258267 4660 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: unexpected error when reading response body. Please retry. Original error: read tcp 38.102.83.146:58942->38.102.83.146:6443: use of closed network connection Jan 29 12:06:23 crc kubenswrapper[4660]: E0129 12:06:23.258284 4660 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: unexpected error when reading response body. Please retry. 
Original error: read tcp 38.102.83.146:58942->38.102.83.146:6443: use of closed network connection" logger="UnhandledError" Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.258293 4660 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.258338 4660 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.258373 4660 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.258415 4660 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.258415 4660 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.258464 4660 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: 
very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.258516 4660 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.258637 4660 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.258893 4660 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.259030 4660 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.257796 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.280770 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.285513 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345328 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-var-lib-cni-bin\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345378 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n4dr\" (UniqueName: \"kubernetes.io/projected/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-kube-api-access-6n4dr\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345408 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e956e367-0df8-44cc-b87e-a7ed32942593-system-cni-dir\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345429 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e956e367-0df8-44cc-b87e-a7ed32942593-os-release\") pod \"multus-additional-cni-plugins-kqctn\" (UID: 
\"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345448 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-multus-cni-dir\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345487 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e956e367-0df8-44cc-b87e-a7ed32942593-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345507 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e956e367-0df8-44cc-b87e-a7ed32942593-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345526 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-cni-binary-copy\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345547 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6zkc\" (UniqueName: 
\"kubernetes.io/projected/1d28a7f3-5242-4198-9ea8-6e12d67b4fa8-kube-api-access-x6zkc\") pod \"machine-config-daemon-mdfz2\" (UID: \"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\") " pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345566 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-multus-daemon-config\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345589 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dgv2\" (UniqueName: \"kubernetes.io/projected/e956e367-0df8-44cc-b87e-a7ed32942593-kube-api-access-2dgv2\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345611 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-etc-kubernetes\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345631 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-system-cni-dir\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345649 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-run-k8s-cni-cncf-io\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345671 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-hostroot\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345741 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d28a7f3-5242-4198-9ea8-6e12d67b4fa8-mcd-auth-proxy-config\") pod \"machine-config-daemon-mdfz2\" (UID: \"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\") " pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345762 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e956e367-0df8-44cc-b87e-a7ed32942593-cnibin\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345781 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-cnibin\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345801 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e956e367-0df8-44cc-b87e-a7ed32942593-cni-binary-copy\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345822 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-var-lib-cni-multus\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345854 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1d28a7f3-5242-4198-9ea8-6e12d67b4fa8-rootfs\") pod \"machine-config-daemon-mdfz2\" (UID: \"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\") " pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345875 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d28a7f3-5242-4198-9ea8-6e12d67b4fa8-proxy-tls\") pod \"machine-config-daemon-mdfz2\" (UID: \"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\") " pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345893 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-os-release\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345914 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-multus-conf-dir\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345935 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-run-multus-certs\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345955 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-run-netns\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.345977 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-multus-socket-dir-parent\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.346000 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-var-lib-kubelet\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.414131 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:11:38.751052616 +0000 UTC Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.446808 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-run-multus-certs\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.446863 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1d28a7f3-5242-4198-9ea8-6e12d67b4fa8-rootfs\") pod \"machine-config-daemon-mdfz2\" (UID: \"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\") " pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.446893 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d28a7f3-5242-4198-9ea8-6e12d67b4fa8-proxy-tls\") pod \"machine-config-daemon-mdfz2\" (UID: \"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\") " pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.446920 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-os-release\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.446950 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-multus-conf-dir\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " 
pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.446977 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-run-netns\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.446974 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-run-multus-certs\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447001 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-multus-socket-dir-parent\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447096 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-var-lib-kubelet\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447119 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-var-lib-cni-bin\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447145 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6n4dr\" (UniqueName: \"kubernetes.io/projected/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-kube-api-access-6n4dr\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447175 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e956e367-0df8-44cc-b87e-a7ed32942593-system-cni-dir\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447199 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e956e367-0df8-44cc-b87e-a7ed32942593-os-release\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447216 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-multus-cni-dir\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447243 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e956e367-0df8-44cc-b87e-a7ed32942593-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447261 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-cni-binary-copy\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447306 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e956e367-0df8-44cc-b87e-a7ed32942593-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448123 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6zkc\" (UniqueName: \"kubernetes.io/projected/1d28a7f3-5242-4198-9ea8-6e12d67b4fa8-kube-api-access-x6zkc\") pod \"machine-config-daemon-mdfz2\" (UID: \"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\") " pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448137 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e956e367-0df8-44cc-b87e-a7ed32942593-os-release\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447705 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-multus-conf-dir\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447730 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-run-netns\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447759 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e956e367-0df8-44cc-b87e-a7ed32942593-system-cni-dir\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447783 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-var-lib-kubelet\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447804 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-var-lib-cni-bin\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447864 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-multus-cni-dir\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448145 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-multus-daemon-config\") pod 
\"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448297 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dgv2\" (UniqueName: \"kubernetes.io/projected/e956e367-0df8-44cc-b87e-a7ed32942593-kube-api-access-2dgv2\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448327 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-etc-kubernetes\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448351 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-system-cni-dir\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448376 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-run-k8s-cni-cncf-io\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448398 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-hostroot\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 
12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448441 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d28a7f3-5242-4198-9ea8-6e12d67b4fa8-mcd-auth-proxy-config\") pod \"machine-config-daemon-mdfz2\" (UID: \"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\") " pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448465 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e956e367-0df8-44cc-b87e-a7ed32942593-cnibin\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448497 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-cni-binary-copy\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448544 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-cnibin\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448507 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-cnibin\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448572 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-run-k8s-cni-cncf-io\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448586 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e956e367-0df8-44cc-b87e-a7ed32942593-cni-binary-copy\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448610 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-var-lib-cni-multus\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448666 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-host-var-lib-cni-multus\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448668 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-multus-daemon-config\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447307 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-multus-socket-dir-parent\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448772 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-hostroot\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448804 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-etc-kubernetes\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448835 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e956e367-0df8-44cc-b87e-a7ed32942593-cnibin\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448847 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-system-cni-dir\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.447536 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1d28a7f3-5242-4198-9ea8-6e12d67b4fa8-rootfs\") pod \"machine-config-daemon-mdfz2\" (UID: \"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\") " 
pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.448907 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-os-release\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.449530 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e956e367-0df8-44cc-b87e-a7ed32942593-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.449602 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e956e367-0df8-44cc-b87e-a7ed32942593-cni-binary-copy\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.449784 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e956e367-0df8-44cc-b87e-a7ed32942593-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.470252 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n4dr\" (UniqueName: \"kubernetes.io/projected/f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3-kube-api-access-6n4dr\") pod \"multus-vb4nc\" (UID: \"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\") " 
pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.473479 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.474324 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.476007 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.476895 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.478332 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.479314 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.480390 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.482089 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.483034 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.485662 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.486252 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.487945 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.488408 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.488910 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.489829 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.490227 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dgv2\" (UniqueName: 
\"kubernetes.io/projected/e956e367-0df8-44cc-b87e-a7ed32942593-kube-api-access-2dgv2\") pod \"multus-additional-cni-plugins-kqctn\" (UID: \"e956e367-0df8-44cc-b87e-a7ed32942593\") " pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.490344 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.491332 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.491699 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.492226 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.494197 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.494657 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.495625 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 
12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.496061 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.500807 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.501289 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.501962 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.503192 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.503669 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.504625 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.505143 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 
12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.506016 4660 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.506113 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.507776 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.510614 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.511163 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.513066 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.513830 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.514775 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.515481 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.520615 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.521491 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.522734 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.523399 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.524357 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.524874 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.525865 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.526427 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.527600 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.528133 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.528986 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.529525 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.532140 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.533392 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.534061 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" 
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.559365 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-vb4nc" Jan 29 12:06:23 crc kubenswrapper[4660]: W0129 12:06:23.582210 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3d2c3c2_a0ef_4204_b7c1_533e7ee29ee3.slice/crio-6abc90dfa35263d60b630f4fa0f71ebb485357f4610b1874157a274ec22bf745 WatchSource:0}: Error finding container 6abc90dfa35263d60b630f4fa0f71ebb485357f4610b1874157a274ec22bf745: Status 404 returned error can't find the container with id 6abc90dfa35263d60b630f4fa0f71ebb485357f4610b1874157a274ec22bf745 Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.582638 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kqctn" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.604006 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6c7n9" event={"ID":"485a51de-d434-4747-b63f-48c7486cefb3","Type":"ContainerStarted","Data":"7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a"} Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.604053 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6c7n9" event={"ID":"485a51de-d434-4747-b63f-48c7486cefb3","Type":"ContainerStarted","Data":"bf4b7dc08169ba6ab5282aa4730a1f5d314e3362b13e241b9f968e68adfb446e"} Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.605784 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af"} Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 
12:06:23.605821 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"148786618d56eb4ed29d08633fb15d9fe739006e3213e4dfff3604b783f25a2a"} Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.609873 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9159582fc032e0f112807c8e2777e19ec60e5a3fc2df42e76132b65ed3344356"} Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.622984 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5"} Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.623041 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5"} Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.623051 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"276ff1ccb6e7e72da4ab9b6c9ba6c8a8290ce1911ff605455c95bf414f5a0180"} Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.628641 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.630065 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7"} Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.630756 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.678970 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-clbcs"] Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.680608 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.685966 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.686298 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.686551 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.686591 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.686736 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.686961 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.687388 4660 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.750585 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-slash\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.750620 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-kubelet\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.750638 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39de46a2-9cba-4331-aab2-697f0337563c-ovn-node-metrics-cert\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.750672 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-systemd-units\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.750709 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-openvswitch\") pod \"ovnkube-node-clbcs\" (UID: 
\"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.750724 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-log-socket\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.750739 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-cni-netd\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.750756 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.750773 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-env-overrides\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.750794 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-run-ovn-kubernetes\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.750809 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-node-log\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.750825 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-ovnkube-script-lib\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.750852 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-run-netns\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.750896 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-var-lib-openvswitch\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.751003 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-ovnkube-config\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.751152 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-systemd\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.751224 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-ovn\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.751263 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-267kg\" (UniqueName: \"kubernetes.io/projected/39de46a2-9cba-4331-aab2-697f0337563c-kube-api-access-267kg\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.751286 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-etc-openvswitch\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.751306 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-cni-bin\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.852830 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-267kg\" (UniqueName: \"kubernetes.io/projected/39de46a2-9cba-4331-aab2-697f0337563c-kube-api-access-267kg\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.852880 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-etc-openvswitch\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.852901 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-cni-bin\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.852924 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-kubelet\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.852943 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-slash\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.852964 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39de46a2-9cba-4331-aab2-697f0337563c-ovn-node-metrics-cert\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853004 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-env-overrides\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853038 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-systemd-units\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853063 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-openvswitch\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853084 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-log-socket\") 
pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853104 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-cni-netd\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853127 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853191 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-run-ovn-kubernetes\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853214 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-node-log\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853237 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-ovnkube-script-lib\") pod 
\"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853260 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-run-netns\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853302 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-var-lib-openvswitch\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853331 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-ovnkube-config\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853360 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-systemd\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853379 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-ovn\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853436 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-ovn\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853666 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-etc-openvswitch\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853723 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-openvswitch\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853732 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-systemd-units\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853669 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-node-log\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853757 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-log-socket\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853775 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-var-lib-openvswitch\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853782 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-cni-netd\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853807 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853830 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-run-ovn-kubernetes\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853853 4660 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-run-netns\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853881 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-kubelet\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853905 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-slash\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.853317 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-cni-bin\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.854018 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-env-overrides\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.854072 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-systemd\") pod 
\"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.854381 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-ovnkube-config\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.854419 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-ovnkube-script-lib\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.856844 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39de46a2-9cba-4331-aab2-697f0337563c-ovn-node-metrics-cert\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:23 crc kubenswrapper[4660]: I0129 12:06:23.870870 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-267kg\" (UniqueName: \"kubernetes.io/projected/39de46a2-9cba-4331-aab2-697f0337563c-kube-api-access-267kg\") pod \"ovnkube-node-clbcs\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.004721 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.055081 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.055176 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.055203 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.055339 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.055388 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:26.05537439 +0000 UTC m=+23.278316522 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.055734 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:06:26.055726291 +0000 UTC m=+23.278668423 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.055770 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.055790 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:26.055784352 +0000 UTC m=+23.278726484 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.071881 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.155951 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.156223 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.156132 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.156436 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.156300 4660 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.156596 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.156624 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.156515 4660 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.156729 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:26.15670472 +0000 UTC m=+23.379646892 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.156751 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:26.156745051 +0000 UTC m=+23.379687283 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.160283 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.194289 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 12:06:24 crc kubenswrapper[4660]: W0129 12:06:24.210053 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39de46a2_9cba_4331_aab2_697f0337563c.slice/crio-8afad13a6c9d8b36803471e600bcd42714994724f588753cc69cdfa66673e5dd WatchSource:0}: Error finding container 8afad13a6c9d8b36803471e600bcd42714994724f588753cc69cdfa66673e5dd: Status 
404 returned error can't find the container with id 8afad13a6c9d8b36803471e600bcd42714994724f588753cc69cdfa66673e5dd Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.250714 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.260956 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.261086 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.261431 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.268918 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.269622 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d28a7f3-5242-4198-9ea8-6e12d67b4fa8-mcd-auth-proxy-config\") pod \"machine-config-daemon-mdfz2\" (UID: \"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\") " pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.273828 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d28a7f3-5242-4198-9ea8-6e12d67b4fa8-proxy-tls\") pod \"machine-config-daemon-mdfz2\" (UID: \"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\") " pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.284472 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.309660 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.324941 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.340303 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.355492 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.359488 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.370819 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.371103 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.375758 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.380932 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6zkc\" (UniqueName: \"kubernetes.io/projected/1d28a7f3-5242-4198-9ea8-6e12d67b4fa8-kube-api-access-x6zkc\") pod \"machine-config-daemon-mdfz2\" (UID: \"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\") " pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.385027 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.400890 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 
12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"
2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.414370 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:36:21.149692049 +0000 UTC Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.414772 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.429740 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.431200 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.439636 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.452164 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.455025 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.467514 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.469485 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.469504 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.469504 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.469628 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.469748 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.469857 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.472543 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: W0129 12:06:24.482970 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d28a7f3_5242_4198_9ea8_6e12d67b4fa8.slice/crio-fd46147de89f0b9e210173769c3b72fa9b3c2c14759ae4264763b5160ce27173 WatchSource:0}: Error finding container fd46147de89f0b9e210173769c3b72fa9b3c2c14759ae4264763b5160ce27173: Status 404 returned error can't find the container with id fd46147de89f0b9e210173769c3b72fa9b3c2c14759ae4264763b5160ce27173 Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.485525 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.488836 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.506720 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.525720 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.541682 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.562087 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.576303 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.589262 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.598442 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.604585 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.605557 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.608642 4660 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.625218 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.665353 4660 generic.go:334] "Generic (PLEG): container finished" podID="e956e367-0df8-44cc-b87e-a7ed32942593" containerID="158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d" exitCode=0 Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.665521 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" event={"ID":"e956e367-0df8-44cc-b87e-a7ed32942593","Type":"ContainerDied","Data":"158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d"} Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.665566 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" event={"ID":"e956e367-0df8-44cc-b87e-a7ed32942593","Type":"ContainerStarted","Data":"3fcdeafe1f608fbf8adb48ec31e7650da80573af889fc2169078e09cba27269e"} Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.666733 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.682321 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.684841 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerStarted","Data":"587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d"} Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.684884 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerStarted","Data":"fd46147de89f0b9e210173769c3b72fa9b3c2c14759ae4264763b5160ce27173"} Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.691321 4660 
generic.go:334] "Generic (PLEG): container finished" podID="39de46a2-9cba-4331-aab2-697f0337563c" containerID="46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219" exitCode=0 Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.691433 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerDied","Data":"46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219"} Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.691468 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerStarted","Data":"8afad13a6c9d8b36803471e600bcd42714994724f588753cc69cdfa66673e5dd"} Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.715985 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vb4nc" event={"ID":"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3","Type":"ContainerStarted","Data":"222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493"} Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.716052 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vb4nc" event={"ID":"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3","Type":"ContainerStarted","Data":"6abc90dfa35263d60b630f4fa0f71ebb485357f4610b1874157a274ec22bf745"} Jan 29 12:06:24 crc kubenswrapper[4660]: E0129 12:06:24.745684 4660 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.754500 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.758154 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.784770 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.790451 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.803673 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.834333 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.841199 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-n6ljb"] Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.841789 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-n6ljb" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.848375 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.848835 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.849153 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.857144 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.857723 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.860889 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.872475 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9986bb09-c5a8-40f0-a89a-219fbeeaaef2-host\") pod \"node-ca-n6ljb\" (UID: \"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\") " pod="openshift-image-registry/node-ca-n6ljb" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 
12:06:24.873370 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9986bb09-c5a8-40f0-a89a-219fbeeaaef2-serviceca\") pod \"node-ca-n6ljb\" (UID: \"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\") " pod="openshift-image-registry/node-ca-n6ljb" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.873541 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5mv4\" (UniqueName: \"kubernetes.io/projected/9986bb09-c5a8-40f0-a89a-219fbeeaaef2-kube-api-access-t5mv4\") pod \"node-ca-n6ljb\" (UID: \"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\") " pod="openshift-image-registry/node-ca-n6ljb" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.893316 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.910623 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.933936 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.952549 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.968656 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.974428 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5mv4\" (UniqueName: \"kubernetes.io/projected/9986bb09-c5a8-40f0-a89a-219fbeeaaef2-kube-api-access-t5mv4\") pod \"node-ca-n6ljb\" (UID: \"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\") " pod="openshift-image-registry/node-ca-n6ljb" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.974503 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/9986bb09-c5a8-40f0-a89a-219fbeeaaef2-host\") pod \"node-ca-n6ljb\" (UID: \"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\") " pod="openshift-image-registry/node-ca-n6ljb" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.974522 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9986bb09-c5a8-40f0-a89a-219fbeeaaef2-serviceca\") pod \"node-ca-n6ljb\" (UID: \"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\") " pod="openshift-image-registry/node-ca-n6ljb" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.974861 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9986bb09-c5a8-40f0-a89a-219fbeeaaef2-host\") pod \"node-ca-n6ljb\" (UID: \"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\") " pod="openshift-image-registry/node-ca-n6ljb" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.975414 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9986bb09-c5a8-40f0-a89a-219fbeeaaef2-serviceca\") pod \"node-ca-n6ljb\" (UID: \"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\") " pod="openshift-image-registry/node-ca-n6ljb" Jan 29 12:06:24 crc kubenswrapper[4660]: I0129 12:06:24.983281 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.004555 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.020536 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.035774 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.040475 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5mv4\" (UniqueName: \"kubernetes.io/projected/9986bb09-c5a8-40f0-a89a-219fbeeaaef2-kube-api-access-t5mv4\") pod \"node-ca-n6ljb\" (UID: \"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\") " pod="openshift-image-registry/node-ca-n6ljb" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.050018 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.065362 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.078645 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.091706 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.108087 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.128849 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.145889 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.167598 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.173713 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-n6ljb" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.193327 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.207728 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: W0129 12:06:25.215132 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9986bb09_c5a8_40f0_a89a_219fbeeaaef2.slice/crio-035f5ec3c8abcd894a9868ad9ff95da1e7ff64f0cd09e876ad1062da5fd437d9 WatchSource:0}: Error finding container 035f5ec3c8abcd894a9868ad9ff95da1e7ff64f0cd09e876ad1062da5fd437d9: Status 404 returned error can't find the container with id 035f5ec3c8abcd894a9868ad9ff95da1e7ff64f0cd09e876ad1062da5fd437d9 Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.227626 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.245013 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.259555 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.279544 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.295945 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.416303 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-12-06 08:21:26.656411582 +0000 UTC Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.503215 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.530124 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.539792 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.546319 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.562781 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.577642 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.602395 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.619809 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.644521 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.671214 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.688610 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.712872 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.733383 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerStarted","Data":"6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c"} Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.736312 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.737899 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" event={"ID":"e956e367-0df8-44cc-b87e-a7ed32942593","Type":"ContainerStarted","Data":"f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c"} Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.747700 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-n6ljb" event={"ID":"9986bb09-c5a8-40f0-a89a-219fbeeaaef2","Type":"ContainerStarted","Data":"640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369"} Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.748144 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-n6ljb" event={"ID":"9986bb09-c5a8-40f0-a89a-219fbeeaaef2","Type":"ContainerStarted","Data":"035f5ec3c8abcd894a9868ad9ff95da1e7ff64f0cd09e876ad1062da5fd437d9"} Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.750311 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a"} Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.752917 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" 
event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerStarted","Data":"61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887"} Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.752958 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerStarted","Data":"fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399"} Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.752971 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerStarted","Data":"c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f"} Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.758785 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: E0129 12:06:25.761870 4660 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.778148 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.801809 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.827189 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.846413 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.879651 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.898358 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.913161 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.928448 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.951422 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.970161 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.983146 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:25 crc kubenswrapper[4660]: I0129 12:06:25.994831 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T1
2:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.015478 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.032085 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.044656 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.055887 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.070256 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.084156 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.084271 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.084300 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.084345 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:06:30.084320341 +0000 UTC m=+27.307262473 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.084421 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.084485 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:30.084469055 +0000 UTC m=+27.307411247 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.084494 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.084631 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:30.084602259 +0000 UTC m=+27.307544391 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.183705 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.184886 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.184911 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.185032 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.185049 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.185047 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.185059 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.185071 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.185084 4660 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.185121 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:30.185107804 +0000 UTC m=+27.408049936 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.185134 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:30.185129495 +0000 UTC m=+27.408071627 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.419125 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:32:55.740788406 +0000 UTC Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.469545 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.469731 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.470180 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.470247 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.470296 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:26 crc kubenswrapper[4660]: E0129 12:06:26.470344 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.759462 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerStarted","Data":"dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a"} Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.759535 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerStarted","Data":"52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4"} Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.759548 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerStarted","Data":"fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478"} Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.763475 4660 generic.go:334] "Generic (PLEG): container finished" podID="e956e367-0df8-44cc-b87e-a7ed32942593" containerID="f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c" exitCode=0 Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.763549 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" event={"ID":"e956e367-0df8-44cc-b87e-a7ed32942593","Type":"ContainerDied","Data":"f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c"} Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.796731 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.818992 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.847151 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.868972 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.885773 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.901877 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.916359 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.937742 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.956297 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:26 crc kubenswrapper[4660]: I0129 12:06:26.980488 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:06:26Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.015813 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.037019 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732
57453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.055728 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.070954 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.091747 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.419388 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:13:17.379351176 +0000 UTC Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.771978 4660 generic.go:334] "Generic (PLEG): container finished" podID="e956e367-0df8-44cc-b87e-a7ed32942593" containerID="ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2" exitCode=0 Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.772045 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" event={"ID":"e956e367-0df8-44cc-b87e-a7ed32942593","Type":"ContainerDied","Data":"ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2"} Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.788721 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.803884 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.818451 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.848032 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.880049 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.903231 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 
12:06:27.916173 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.931341 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.946516 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.963223 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.980049 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:27 crc kubenswrapper[4660]: I0129 12:06:27.991683 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:27Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.005614 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.020229 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.030626 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.421037 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:35:36.704792273 +0000 UTC Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.469382 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:28 crc kubenswrapper[4660]: E0129 12:06:28.469529 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.469890 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:28 crc kubenswrapper[4660]: E0129 12:06:28.469938 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.469986 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:28 crc kubenswrapper[4660]: E0129 12:06:28.470028 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.553214 4660 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.556173 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.556230 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.556243 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.556365 4660 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.565787 4660 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.566133 4660 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.567471 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.567524 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.567537 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.567556 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.567570 4660 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:28Z","lastTransitionTime":"2026-01-29T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:28 crc kubenswrapper[4660]: E0129 12:06:28.582258 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.586685 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.586736 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.586744 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.586760 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.586769 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:28Z","lastTransitionTime":"2026-01-29T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:28 crc kubenswrapper[4660]: E0129 12:06:28.599445 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.604295 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.604347 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.604358 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.604377 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.604395 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:28Z","lastTransitionTime":"2026-01-29T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:28 crc kubenswrapper[4660]: E0129 12:06:28.620633 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.624785 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.624844 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.624857 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.624875 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.624891 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:28Z","lastTransitionTime":"2026-01-29T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:28 crc kubenswrapper[4660]: E0129 12:06:28.638961 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.643235 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.643277 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.643290 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.643304 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.643314 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:28Z","lastTransitionTime":"2026-01-29T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:28 crc kubenswrapper[4660]: E0129 12:06:28.656216 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: E0129 12:06:28.656592 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.658177 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.658221 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.658232 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.658250 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.658263 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:28Z","lastTransitionTime":"2026-01-29T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.760387 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.760453 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.760465 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.760479 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.760488 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:28Z","lastTransitionTime":"2026-01-29T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.777178 4660 generic.go:334] "Generic (PLEG): container finished" podID="e956e367-0df8-44cc-b87e-a7ed32942593" containerID="5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884" exitCode=0 Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.777263 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" event={"ID":"e956e367-0df8-44cc-b87e-a7ed32942593","Type":"ContainerDied","Data":"5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884"} Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.782413 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerStarted","Data":"a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1"} Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.801483 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.815583 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8
a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.829171 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819ee
db413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.841238 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.854506 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.864083 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.864127 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.864161 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.864176 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.864186 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:28Z","lastTransitionTime":"2026-01-29T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.868158 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.889269 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.904122 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.920342 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.933333 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.942757 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.967101 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.967429 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.967451 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.967462 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.967478 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.967489 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:28Z","lastTransitionTime":"2026-01-29T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.980802 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:28 crc kubenswrapper[4660]: I0129 12:06:28.991816 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:28Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.002725 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.070049 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.070089 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.070098 4660 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.070113 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.070122 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:29Z","lastTransitionTime":"2026-01-29T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.173464 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.173538 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.173562 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.173596 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.173619 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:29Z","lastTransitionTime":"2026-01-29T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.276437 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.276483 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.276494 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.276509 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.276518 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:29Z","lastTransitionTime":"2026-01-29T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.378822 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.378870 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.378881 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.378897 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.378909 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:29Z","lastTransitionTime":"2026-01-29T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.421885 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 21:15:16.170136692 +0000 UTC Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.480516 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.480553 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.480565 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.480579 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.480591 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:29Z","lastTransitionTime":"2026-01-29T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.583298 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.583355 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.583379 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.583403 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.583414 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:29Z","lastTransitionTime":"2026-01-29T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.687017 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.687080 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.687095 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.687118 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.687135 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:29Z","lastTransitionTime":"2026-01-29T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.789383 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.789432 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.789445 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.789466 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.789479 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:29Z","lastTransitionTime":"2026-01-29T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.791356 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" event={"ID":"e956e367-0df8-44cc-b87e-a7ed32942593","Type":"ContainerStarted","Data":"d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c"} Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.891791 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.891853 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.891870 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.891892 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.891912 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:29Z","lastTransitionTime":"2026-01-29T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.995539 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.995595 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.995609 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.995628 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:29 crc kubenswrapper[4660]: I0129 12:06:29.995645 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:29Z","lastTransitionTime":"2026-01-29T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.097576 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.097616 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.097625 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.097640 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.097651 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:30Z","lastTransitionTime":"2026-01-29T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.131177 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.131516 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 12:06:30.131529 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:06:38.131506771 +0000 UTC m=+35.354448903 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.131561 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 12:06:30.131723 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 12:06:30.131724 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 12:06:30.131776 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:38.131765599 +0000 UTC m=+35.354707731 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 12:06:30.131791 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:38.131784309 +0000 UTC m=+35.354726441 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.200059 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.200105 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.200116 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.200131 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.200143 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:30Z","lastTransitionTime":"2026-01-29T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.232108 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.232166 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 12:06:30.232296 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 12:06:30.232335 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 12:06:30.232296 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 
12:06:30.232348 4660 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 12:06:30.232359 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 12:06:30.232371 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 12:06:30.232404 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:38.232387227 +0000 UTC m=+35.455329359 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 12:06:30.232421 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:38.232414028 +0000 UTC m=+35.455356150 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.301929 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.301971 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.301983 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.301999 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.302009 4660 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:30Z","lastTransitionTime":"2026-01-29T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.404191 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.404233 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.404249 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.404266 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.404279 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:30Z","lastTransitionTime":"2026-01-29T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.422922 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:16:50.961563262 +0000 UTC Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.469607 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 12:06:30.470192 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.470447 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.470648 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 12:06:30.470792 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:30 crc kubenswrapper[4660]: E0129 12:06:30.470842 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.507544 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.507584 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.507594 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.507612 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.507622 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:30Z","lastTransitionTime":"2026-01-29T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.610284 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.610320 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.610328 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.610342 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.610351 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:30Z","lastTransitionTime":"2026-01-29T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.712303 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.712339 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.712352 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.712367 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.712377 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:30Z","lastTransitionTime":"2026-01-29T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.798515 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerStarted","Data":"7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314"} Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.798905 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.814138 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.814928 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.814970 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.814982 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.815000 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.815012 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:30Z","lastTransitionTime":"2026-01-29T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.831154 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.833615 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.843022 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.858107 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.874820 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.889594 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.908910 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.917250 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.917286 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.917296 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.917311 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.917320 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:30Z","lastTransitionTime":"2026-01-29T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.933805 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.955512 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8
a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.969287 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.980389 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:30 crc kubenswrapper[4660]: I0129 12:06:30.995092 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.010371 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.019784 4660 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.019830 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.019844 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.019862 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.019909 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:31Z","lastTransitionTime":"2026-01-29T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.024437 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.041422 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.056437 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.068970 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.087323 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.101606 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.113119 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.122449 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.122485 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.122493 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.122507 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.122517 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:31Z","lastTransitionTime":"2026-01-29T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.127540 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.149314 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.171009 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.188234 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dg
v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.205032 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.221752 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.224155 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.224199 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.224208 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.224225 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.224235 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:31Z","lastTransitionTime":"2026-01-29T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.235595 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.249816 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.269563 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.281352 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.326324 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.326359 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.326366 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.326380 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.326390 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:31Z","lastTransitionTime":"2026-01-29T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.423238 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:24:47.188366498 +0000 UTC Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.428826 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.428901 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.428915 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.428931 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.428943 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:31Z","lastTransitionTime":"2026-01-29T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.531416 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.531450 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.531458 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.531471 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.531481 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:31Z","lastTransitionTime":"2026-01-29T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.633965 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.634003 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.634024 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.634039 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.634050 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:31Z","lastTransitionTime":"2026-01-29T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.737000 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.737041 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.737052 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.737069 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.737081 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:31Z","lastTransitionTime":"2026-01-29T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.801352 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.801401 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.821823 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.833725 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.839771 4660 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.839850 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.839864 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.839884 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.839897 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:31Z","lastTransitionTime":"2026-01-29T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.848629 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.863098 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.881094 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.938577 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.942302 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.942345 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.942357 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.942372 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.942383 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:31Z","lastTransitionTime":"2026-01-29T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.965857 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dg
v2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.984565 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:31 crc kubenswrapper[4660]: I0129 12:06:31.998662 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:31Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.012566 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:32Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.026787 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:32Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.040127 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:32Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.049910 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:32Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.054392 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.054440 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.054453 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.054470 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.054481 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:32Z","lastTransitionTime":"2026-01-29T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.061844 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:32Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.073189 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:32Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.082927 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:32Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.157362 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.157512 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.157531 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.157563 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.157582 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:32Z","lastTransitionTime":"2026-01-29T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.261576 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.261646 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.261660 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.261680 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.261696 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:32Z","lastTransitionTime":"2026-01-29T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.364386 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.364446 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.364459 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.364477 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.364489 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:32Z","lastTransitionTime":"2026-01-29T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.423768 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:26:30.204259183 +0000 UTC Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.467128 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.467175 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.467186 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.467204 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.467216 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:32Z","lastTransitionTime":"2026-01-29T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.469382 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.469454 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.469506 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:32 crc kubenswrapper[4660]: E0129 12:06:32.469630 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:32 crc kubenswrapper[4660]: E0129 12:06:32.469813 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:32 crc kubenswrapper[4660]: E0129 12:06:32.469944 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.571070 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.571363 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.571373 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.571394 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.571406 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:32Z","lastTransitionTime":"2026-01-29T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.674407 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.674453 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.674462 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.674483 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.674504 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:32Z","lastTransitionTime":"2026-01-29T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.778163 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.778212 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.778223 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.778241 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.778254 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:32Z","lastTransitionTime":"2026-01-29T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.856298 4660 generic.go:334] "Generic (PLEG): container finished" podID="e956e367-0df8-44cc-b87e-a7ed32942593" containerID="d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c" exitCode=0 Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.856420 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" event={"ID":"e956e367-0df8-44cc-b87e-a7ed32942593","Type":"ContainerDied","Data":"d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c"} Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.876905 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:32Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.882406 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.882433 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.882443 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.882460 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.882470 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:32Z","lastTransitionTime":"2026-01-29T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.893810 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:32Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.906201 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:32Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.917900 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:32Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.939676 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:32Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.953906 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:2
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:32Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.969208 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:32Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.985366 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.985417 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.985429 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.985453 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.985523 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:32Z","lastTransitionTime":"2026-01-29T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:32 crc kubenswrapper[4660]: I0129 12:06:32.995122 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:32Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.013552 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.029955 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.048225 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.062106 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.077566 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.088827 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.088924 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.088944 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.089001 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.089016 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:33Z","lastTransitionTime":"2026-01-29T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.094678 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.110359 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.193161 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.193218 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.193233 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.193547 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.193583 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:33Z","lastTransitionTime":"2026-01-29T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.297513 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.297556 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.297569 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.297585 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.297596 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:33Z","lastTransitionTime":"2026-01-29T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.400453 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.400487 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.400495 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.400526 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.400536 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:33Z","lastTransitionTime":"2026-01-29T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.424092 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 13:11:33.336635016 +0000 UTC Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.488785 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\
\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef5
3a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.502295 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.502331 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.502339 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.502352 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.502360 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:33Z","lastTransitionTime":"2026-01-29T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.504002 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.517738 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.532678 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.550949 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.568325 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.590528 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.602338 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.604391 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.604424 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.604474 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.604496 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.604508 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:33Z","lastTransitionTime":"2026-01-29T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.624970 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.637929 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.649996 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.663879 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.676012 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.691248 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.707930 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.707981 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.707994 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.708014 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.708028 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:33Z","lastTransitionTime":"2026-01-29T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.713833 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.810163 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.810221 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.810236 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.810254 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.810266 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:33Z","lastTransitionTime":"2026-01-29T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.863825 4660 generic.go:334] "Generic (PLEG): container finished" podID="e956e367-0df8-44cc-b87e-a7ed32942593" containerID="ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906" exitCode=0 Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.863937 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" event={"ID":"e956e367-0df8-44cc-b87e-a7ed32942593","Type":"ContainerDied","Data":"ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906"} Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.881444 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.899348 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.913224 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.913267 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.913279 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.913298 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.913311 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:33Z","lastTransitionTime":"2026-01-29T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.916053 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.929280 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.951133 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.971665 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:33 crc kubenswrapper[4660]: I0129 12:06:33.988419 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:33Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.003092 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.017003 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.017036 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.017045 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.017059 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.017102 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:34Z","lastTransitionTime":"2026-01-29T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.017428 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.031268 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 
12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.048070 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.062601 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.084608 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.107545 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.119754 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.119804 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.119818 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.119836 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.119848 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:34Z","lastTransitionTime":"2026-01-29T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.126204 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.222806 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.222849 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.222858 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.222873 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.222882 4660 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:34Z","lastTransitionTime":"2026-01-29T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.325801 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.325864 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.325882 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.325905 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.325921 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:34Z","lastTransitionTime":"2026-01-29T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.424400 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:22:50.895588391 +0000 UTC Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.428437 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.428495 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.428512 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.428536 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.428554 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:34Z","lastTransitionTime":"2026-01-29T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.468922 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.468945 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:34 crc kubenswrapper[4660]: E0129 12:06:34.469082 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.469119 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:34 crc kubenswrapper[4660]: E0129 12:06:34.469240 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:34 crc kubenswrapper[4660]: E0129 12:06:34.469409 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.531819 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.531868 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.531887 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.531907 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.531919 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:34Z","lastTransitionTime":"2026-01-29T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.634840 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.634900 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.634913 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.634931 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.634945 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:34Z","lastTransitionTime":"2026-01-29T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.737325 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.737379 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.737391 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.737410 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.737423 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:34Z","lastTransitionTime":"2026-01-29T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.839425 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.839478 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.839490 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.839508 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.839521 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:34Z","lastTransitionTime":"2026-01-29T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.870028 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" event={"ID":"e956e367-0df8-44cc-b87e-a7ed32942593","Type":"ContainerStarted","Data":"57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293"} Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.895797 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:0
6:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-cop
y\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.913278 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1
f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.927141 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.939287 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:34 crc kubenswrapper[4660]: 
I0129 12:06:34.942244 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.942282 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.942292 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.942308 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.942320 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:34Z","lastTransitionTime":"2026-01-29T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.950366 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.965470 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.980635 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:34 crc kubenswrapper[4660]: I0129 12:06:34.996253 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:34Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.013944 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.036193 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.046515 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.046573 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.046585 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.046610 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.046625 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:35Z","lastTransitionTime":"2026-01-29T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.056870 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.083381 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.104678 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:2
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.116339 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.130295 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.149663 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.149741 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.149755 4660 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.149775 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.150055 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:35Z","lastTransitionTime":"2026-01-29T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.252359 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.252409 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.252421 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.252439 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.252452 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:35Z","lastTransitionTime":"2026-01-29T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.355038 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.355094 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.355104 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.355118 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.355128 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:35Z","lastTransitionTime":"2026-01-29T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.425492 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:24:18.018999506 +0000 UTC Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.458032 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.458287 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.458400 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.459016 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.459212 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:35Z","lastTransitionTime":"2026-01-29T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.562254 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.562621 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.562704 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.562828 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.562937 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:35Z","lastTransitionTime":"2026-01-29T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.666410 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.666463 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.666476 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.666499 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.666510 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:35Z","lastTransitionTime":"2026-01-29T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.770874 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.770950 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.770962 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.770981 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.770994 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:35Z","lastTransitionTime":"2026-01-29T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.787328 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv"] Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.787796 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.791657 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.793595 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.812985 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.827274 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.839029 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.861297 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.873168 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.873263 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.873278 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.873299 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.873337 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:35Z","lastTransitionTime":"2026-01-29T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.878446 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.891176 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.896430 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9fee30d5-4eb6-49c2-9c8b-457aca2103a5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-r9jbv\" (UID: \"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.896494 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9fee30d5-4eb6-49c2-9c8b-457aca2103a5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-r9jbv\" (UID: \"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.896513 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tg7sw\" (UniqueName: \"kubernetes.io/projected/9fee30d5-4eb6-49c2-9c8b-457aca2103a5-kube-api-access-tg7sw\") pod \"ovnkube-control-plane-749d76644c-r9jbv\" (UID: \"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.896535 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9fee30d5-4eb6-49c2-9c8b-457aca2103a5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-r9jbv\" (UID: \"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.907555 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.929512 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.944840 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d
8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.958369 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.971409 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.976593 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.976626 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.976636 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.976654 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.976666 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:35Z","lastTransitionTime":"2026-01-29T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.990503 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.997652 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg7sw\" (UniqueName: \"kubernetes.io/projected/9fee30d5-4eb6-49c2-9c8b-457aca2103a5-kube-api-access-tg7sw\") pod \"ovnkube-control-plane-749d76644c-r9jbv\" (UID: \"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.997725 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9fee30d5-4eb6-49c2-9c8b-457aca2103a5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-r9jbv\" (UID: \"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.997764 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9fee30d5-4eb6-49c2-9c8b-457aca2103a5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-r9jbv\" (UID: \"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.997797 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9fee30d5-4eb6-49c2-9c8b-457aca2103a5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-r9jbv\" (UID: \"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" Jan 29 12:06:35 crc kubenswrapper[4660]: I0129 12:06:35.998442 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9fee30d5-4eb6-49c2-9c8b-457aca2103a5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-r9jbv\" (UID: \"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.000108 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9fee30d5-4eb6-49c2-9c8b-457aca2103a5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-r9jbv\" (UID: \"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" Jan 29 
12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.007467 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:36Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.008215 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9fee30d5-4eb6-49c2-9c8b-457aca2103a5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-r9jbv\" (UID: \"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.023805 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:36Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.025920 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg7sw\" (UniqueName: \"kubernetes.io/projected/9fee30d5-4eb6-49c2-9c8b-457aca2103a5-kube-api-access-tg7sw\") pod \"ovnkube-control-plane-749d76644c-r9jbv\" (UID: \"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.038577 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:36Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.053824 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:36Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.079123 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.079176 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.079188 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 
12:06:36.079209 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.079222 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:36Z","lastTransitionTime":"2026-01-29T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.115537 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" Jan 29 12:06:36 crc kubenswrapper[4660]: W0129 12:06:36.131677 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fee30d5_4eb6_49c2_9c8b_457aca2103a5.slice/crio-0502bdb1bb01d1a169163ee6b7f74c37d6619b91dcd4148c13f01fa3217f84f6 WatchSource:0}: Error finding container 0502bdb1bb01d1a169163ee6b7f74c37d6619b91dcd4148c13f01fa3217f84f6: Status 404 returned error can't find the container with id 0502bdb1bb01d1a169163ee6b7f74c37d6619b91dcd4148c13f01fa3217f84f6 Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.182328 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.182381 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.182396 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.182417 4660 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.182435 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:36Z","lastTransitionTime":"2026-01-29T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.284367 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.284406 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.284425 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.284446 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.284459 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:36Z","lastTransitionTime":"2026-01-29T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.387537 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.387611 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.387621 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.387637 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.387650 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:36Z","lastTransitionTime":"2026-01-29T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.425742 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 15:57:54.759643562 +0000 UTC Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.469232 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.469295 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.469296 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:36 crc kubenswrapper[4660]: E0129 12:06:36.469408 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:36 crc kubenswrapper[4660]: E0129 12:06:36.469566 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:36 crc kubenswrapper[4660]: E0129 12:06:36.469650 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.490014 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.490059 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.490070 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.490087 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.490099 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:36Z","lastTransitionTime":"2026-01-29T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.592578 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.592626 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.592646 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.592729 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.592751 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:36Z","lastTransitionTime":"2026-01-29T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.696119 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.696165 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.696178 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.696229 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.696243 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:36Z","lastTransitionTime":"2026-01-29T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.799219 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.799271 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.799285 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.799304 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.799318 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:36Z","lastTransitionTime":"2026-01-29T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.889379 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovnkube-controller/0.log" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.893467 4660 generic.go:334] "Generic (PLEG): container finished" podID="39de46a2-9cba-4331-aab2-697f0337563c" containerID="7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314" exitCode=1 Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.893598 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerDied","Data":"7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314"} Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.894648 4660 scope.go:117] "RemoveContainer" containerID="7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.897823 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" event={"ID":"9fee30d5-4eb6-49c2-9c8b-457aca2103a5","Type":"ContainerStarted","Data":"70915cae47bd54165bd23fbe68b502bc3a32a4963bf0e79c3861ceb250d84163"} Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.897885 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" event={"ID":"9fee30d5-4eb6-49c2-9c8b-457aca2103a5","Type":"ContainerStarted","Data":"f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5"} Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.897898 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" 
event={"ID":"9fee30d5-4eb6-49c2-9c8b-457aca2103a5","Type":"ContainerStarted","Data":"0502bdb1bb01d1a169163ee6b7f74c37d6619b91dcd4148c13f01fa3217f84f6"} Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.910307 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.910365 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.910376 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.910394 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.910405 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:36Z","lastTransitionTime":"2026-01-29T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.916602 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:36Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.919664 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kj5hd"] Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.920123 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:36 crc kubenswrapper[4660]: E0129 12:06:36.920180 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.931073 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:36Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.946731 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:36Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.966675 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"rmers/externalversions/factory.go:140\\\\nI0129 12:06:36.118838 5831 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:06:36.118901 5831 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:06:36.119195 5831 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:06:36.119252 5831 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 12:06:36.119289 5831 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 12:06:36.119551 5831 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:06:36.120260 5831 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:06:36.120595 5831 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce
30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:36Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:36 crc kubenswrapper[4660]: I0129 12:06:36.991039 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:36Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.008024 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpwtx\" (UniqueName: \"kubernetes.io/projected/37236252-cd23-4e04-8cf2-28b59af3e179-kube-api-access-fpwtx\") pod \"network-metrics-daemon-kj5hd\" (UID: \"37236252-cd23-4e04-8cf2-28b59af3e179\") " pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.008163 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs\") pod \"network-metrics-daemon-kj5hd\" (UID: \"37236252-cd23-4e04-8cf2-28b59af3e179\") " pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.012806 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.012985 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.013065 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:37 
crc kubenswrapper[4660]: I0129 12:06:37.013203 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.013304 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:37Z","lastTransitionTime":"2026-01-29T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.019929 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:
//ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.034946 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.052633 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.069470 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.084757 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.102457 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.108885 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs\") pod \"network-metrics-daemon-kj5hd\" (UID: \"37236252-cd23-4e04-8cf2-28b59af3e179\") " pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.108972 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpwtx\" (UniqueName: \"kubernetes.io/projected/37236252-cd23-4e04-8cf2-28b59af3e179-kube-api-access-fpwtx\") pod \"network-metrics-daemon-kj5hd\" (UID: \"37236252-cd23-4e04-8cf2-28b59af3e179\") " pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:37 crc kubenswrapper[4660]: E0129 12:06:37.109125 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:06:37 crc kubenswrapper[4660]: E0129 
12:06:37.109220 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs podName:37236252-cd23-4e04-8cf2-28b59af3e179 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:37.609196748 +0000 UTC m=+34.832138880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs") pod "network-metrics-daemon-kj5hd" (UID: "37236252-cd23-4e04-8cf2-28b59af3e179") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.116405 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.116471 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.116485 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.116507 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.116529 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:37Z","lastTransitionTime":"2026-01-29T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.118301 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z 
is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.129895 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpwtx\" (UniqueName: \"kubernetes.io/projected/37236252-cd23-4e04-8cf2-28b59af3e179-kube-api-access-fpwtx\") pod \"network-metrics-daemon-kj5hd\" (UID: \"37236252-cd23-4e04-8cf2-28b59af3e179\") " pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.137163 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.150250 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.162613 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\"
:\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.173432 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.189133 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1
f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.208746 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fe
f53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.218982 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.219039 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.219054 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.219071 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.219354 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:37Z","lastTransitionTime":"2026-01-29T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.222844 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.236087 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.252451 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.264382 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.277915 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.288817 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.302677 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.313825 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc 
kubenswrapper[4660]: I0129 12:06:37.321989 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.322054 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.322071 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.322098 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.322123 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:37Z","lastTransitionTime":"2026-01-29T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.330539 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.340091 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.355187 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.367831 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.381728 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.398682 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.425546 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.425593 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.425606 4660 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.425626 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.425638 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:37Z","lastTransitionTime":"2026-01-29T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.426201 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:17:42.292038219 +0000 UTC Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.434146 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-
01-29T12:06:36Z\\\",\\\"message\\\":\\\"rmers/externalversions/factory.go:140\\\\nI0129 12:06:36.118838 5831 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:06:36.118901 5831 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:06:36.119195 5831 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:06:36.119252 5831 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 12:06:36.119289 5831 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 12:06:36.119551 5831 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:06:36.120260 5831 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:06:36.120595 5831 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce
30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.528161 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.528201 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.528211 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.528225 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.528237 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:37Z","lastTransitionTime":"2026-01-29T12:06:37Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.614578 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs\") pod \"network-metrics-daemon-kj5hd\" (UID: \"37236252-cd23-4e04-8cf2-28b59af3e179\") " pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:37 crc kubenswrapper[4660]: E0129 12:06:37.615206 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:06:37 crc kubenswrapper[4660]: E0129 12:06:37.615313 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs podName:37236252-cd23-4e04-8cf2-28b59af3e179 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:38.615287002 +0000 UTC m=+35.838229134 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs") pod "network-metrics-daemon-kj5hd" (UID: "37236252-cd23-4e04-8cf2-28b59af3e179") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.630799 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.630888 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.630901 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.630936 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.630952 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:37Z","lastTransitionTime":"2026-01-29T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.733622 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.733658 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.733666 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.733679 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.733705 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:37Z","lastTransitionTime":"2026-01-29T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.836745 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.836791 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.836800 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.836817 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.836830 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:37Z","lastTransitionTime":"2026-01-29T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.904978 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovnkube-controller/0.log" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.909065 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerStarted","Data":"46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c"} Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.909562 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.927538 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.940168 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.940330 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.940343 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 
12:06:37.940364 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.940393 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:37Z","lastTransitionTime":"2026-01-29T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.944100 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.955855 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bde
af3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.968504 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.984064 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a
4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:37 crc kubenswrapper[4660]: I0129 12:06:37.999647 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.014777 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.028421 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.043523 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.043581 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.043595 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.043617 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.043631 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:38Z","lastTransitionTime":"2026-01-29T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.045744 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.059117 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.070805 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.083584 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.101812 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"rmers/externalversions/factory.go:140\\\\nI0129 12:06:36.118838 5831 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:06:36.118901 5831 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:06:36.119195 5831 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:06:36.119252 5831 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 12:06:36.119289 5831 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 12:06:36.119551 5831 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:06:36.120260 5831 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:06:36.120595 5831 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\"
:\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.114727 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.126419 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.146767 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.146823 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.146834 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.146852 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.146864 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:38Z","lastTransitionTime":"2026-01-29T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.156516 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.175323 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1
f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.221976 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.222219 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:06:54.222176799 +0000 UTC m=+51.445118931 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.222406 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.222437 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.222614 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.222666 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:54.222658694 +0000 UTC m=+51.445600826 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.222660 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.222791 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:54.222769447 +0000 UTC m=+51.445711579 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.249211 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.249258 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.249267 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.249283 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:38 crc 
kubenswrapper[4660]: I0129 12:06:38.249293 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:38Z","lastTransitionTime":"2026-01-29T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.323627 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.323739 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.323865 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.323890 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.323904 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.323916 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.323922 4660 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.323929 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.323989 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:54.323970553 +0000 UTC m=+51.546912705 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.324010 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:54.324001604 +0000 UTC m=+51.546943756 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.351179 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.351236 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.351247 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.351262 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.351275 4660 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:38Z","lastTransitionTime":"2026-01-29T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.427261 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 08:47:05.173218209 +0000 UTC Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.453828 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.453872 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.453883 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.453903 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.453921 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:38Z","lastTransitionTime":"2026-01-29T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.469293 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.469321 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.469345 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.469408 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.469294 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.469510 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.469571 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.469643 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.556147 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.556194 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.556204 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.556219 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.556231 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:38Z","lastTransitionTime":"2026-01-29T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.626411 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs\") pod \"network-metrics-daemon-kj5hd\" (UID: \"37236252-cd23-4e04-8cf2-28b59af3e179\") " pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.626600 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.626677 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs podName:37236252-cd23-4e04-8cf2-28b59af3e179 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:40.626658923 +0000 UTC m=+37.849601055 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs") pod "network-metrics-daemon-kj5hd" (UID: "37236252-cd23-4e04-8cf2-28b59af3e179") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.658482 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.658573 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.658609 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.658640 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.658661 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:38Z","lastTransitionTime":"2026-01-29T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.761489 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.761541 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.761553 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.761569 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.761585 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:38Z","lastTransitionTime":"2026-01-29T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.837007 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.837045 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.837053 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.837067 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.837076 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:38Z","lastTransitionTime":"2026-01-29T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.857944 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.861574 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.861619 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.861631 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.861651 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.861663 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:38Z","lastTransitionTime":"2026-01-29T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.879021 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.882029 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.882054 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.882062 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.882075 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.882084 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:38Z","lastTransitionTime":"2026-01-29T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.894574 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.898570 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.898606 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.898615 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.898629 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.898638 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:38Z","lastTransitionTime":"2026-01-29T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.909961 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.913862 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovnkube-controller/1.log" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.914037 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.914089 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.914103 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.914118 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.914128 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:38Z","lastTransitionTime":"2026-01-29T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.914425 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovnkube-controller/0.log" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.916606 4660 generic.go:334] "Generic (PLEG): container finished" podID="39de46a2-9cba-4331-aab2-697f0337563c" containerID="46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c" exitCode=1 Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.916643 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerDied","Data":"46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c"} Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.916785 4660 scope.go:117] "RemoveContainer" containerID="7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.917722 4660 scope.go:117] "RemoveContainer" containerID="46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c" Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.918037 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.927930 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: E0129 12:06:38.928075 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.929751 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.929793 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.929808 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.929827 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.929841 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:38Z","lastTransitionTime":"2026-01-29T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.934099 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.945716 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.958225 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.969621 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.983237 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:38 crc kubenswrapper[4660]: I0129 12:06:38.996612 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:38Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.019345 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"rmers/externalversions/factory.go:140\\\\nI0129 12:06:36.118838 5831 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:06:36.118901 5831 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:06:36.119195 5831 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:06:36.119252 5831 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 12:06:36.119289 5831 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 12:06:36.119551 5831 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:06:36.120260 5831 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:06:36.120595 5831 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"e:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:38.045396 6066 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 2.768584ms\\\\nI0129 12:06:38.045420 6066 services_controller.go:356] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-node for 
network=default\\\\nF0129 12:06:38.045426 6066 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:06:38.045429 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\
\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.032008 4660 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.032229 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.032323 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.032406 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.032468 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:39Z","lastTransitionTime":"2026-01-29T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.035039 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.053526 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.067336 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.080076 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.095240 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.110205 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.123643 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.135513 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.135556 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.135568 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.135583 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.135595 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:39Z","lastTransitionTime":"2026-01-29T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.136047 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.149354 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a
4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.162534 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc 
kubenswrapper[4660]: I0129 12:06:39.238381 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.238434 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.238451 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.238474 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.238491 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:39Z","lastTransitionTime":"2026-01-29T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.341345 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.341406 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.341425 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.341449 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.341469 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:39Z","lastTransitionTime":"2026-01-29T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.427860 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 19:55:31.115510922 +0000 UTC Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.447855 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.447906 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.447917 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.447935 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.447947 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:39Z","lastTransitionTime":"2026-01-29T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.550322 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.550357 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.550368 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.550384 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.550395 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:39Z","lastTransitionTime":"2026-01-29T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.652974 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.653013 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.653024 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.653040 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.653052 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:39Z","lastTransitionTime":"2026-01-29T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.748006 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.756092 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.756336 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.756403 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.756479 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.756541 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:39Z","lastTransitionTime":"2026-01-29T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.792804 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.815414 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1
f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.833628 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.853927 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: 
I0129 12:06:39.859101 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.859149 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.859161 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.859180 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.859192 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:39Z","lastTransitionTime":"2026-01-29T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.869013 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.884882 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a55
83997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.902235 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd
791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 
12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.915959 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.921174 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovnkube-controller/1.log" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.924360 4660 scope.go:117] "RemoveContainer" containerID="46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c" Jan 29 12:06:39 crc kubenswrapper[4660]: E0129 12:06:39.924633 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.932707 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.941591 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc 
kubenswrapper[4660]: I0129 12:06:39.953073 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.962124 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.962158 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.962168 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.962182 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.962191 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:39Z","lastTransitionTime":"2026-01-29T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.965970 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.974531 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:39 crc kubenswrapper[4660]: I0129 12:06:39.991275 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f
36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7493719fcd9fe3aeb7c5ad07dacb48e2255893683a21792319eba889f6c91314\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"rmers/externalversions/factory.go:140\\\\nI0129 12:06:36.118838 5831 
reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:06:36.118901 5831 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:06:36.119195 5831 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:06:36.119252 5831 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 12:06:36.119289 5831 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 12:06:36.119551 5831 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:06:36.120260 5831 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:06:36.120595 5831 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"e:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:38.045396 6066 
services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 2.768584ms\\\\nI0129 12:06:38.045420 6066 services_controller.go:356] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-node for network=default\\\\nF0129 12:06:38.045426 6066 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:06:38.045429 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.001526 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.012378 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.023756 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.040487 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"e:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:38.045396 6066 services_controller.go:360] Finished syncing service dns-default on 
namespace openshift-dns for network=default : 2.768584ms\\\\nI0129 12:06:38.045420 6066 services_controller.go:356] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-node for network=default\\\\nF0129 12:06:38.045426 6066 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:06:38.045429 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba5
8764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.055286 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.065635 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.065685 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.065717 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.065735 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.065750 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:40Z","lastTransitionTime":"2026-01-29T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.075976 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.092392 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.119165 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.136679 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d
8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.153115 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.167123 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.168312 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.168348 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.168358 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.168373 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.168382 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:40Z","lastTransitionTime":"2026-01-29T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.180207 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.192930 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a
4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.212492 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.230088 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.245561 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.256614 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc 
kubenswrapper[4660]: I0129 12:06:40.270115 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.271130 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.271225 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.271288 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.271347 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.271408 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:40Z","lastTransitionTime":"2026-01-29T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.281994 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.292579 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:40Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.375202 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.375407 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.375415 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.375428 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.375440 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:40Z","lastTransitionTime":"2026-01-29T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.428039 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:34:21.165581642 +0000 UTC Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.469607 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.469648 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.469640 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.469613 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:40 crc kubenswrapper[4660]: E0129 12:06:40.469795 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:40 crc kubenswrapper[4660]: E0129 12:06:40.469880 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:40 crc kubenswrapper[4660]: E0129 12:06:40.469944 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:40 crc kubenswrapper[4660]: E0129 12:06:40.470012 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.478125 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.478311 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.478325 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.478344 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.478357 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:40Z","lastTransitionTime":"2026-01-29T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.581611 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.581667 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.581678 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.581713 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.581728 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:40Z","lastTransitionTime":"2026-01-29T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.649721 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs\") pod \"network-metrics-daemon-kj5hd\" (UID: \"37236252-cd23-4e04-8cf2-28b59af3e179\") " pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:40 crc kubenswrapper[4660]: E0129 12:06:40.649886 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:06:40 crc kubenswrapper[4660]: E0129 12:06:40.649954 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs podName:37236252-cd23-4e04-8cf2-28b59af3e179 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:44.649937054 +0000 UTC m=+41.872879186 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs") pod "network-metrics-daemon-kj5hd" (UID: "37236252-cd23-4e04-8cf2-28b59af3e179") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.685231 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.685272 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.685282 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.685300 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.685310 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:40Z","lastTransitionTime":"2026-01-29T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.788656 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.788724 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.788734 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.788750 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.788760 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:40Z","lastTransitionTime":"2026-01-29T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.891581 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.891623 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.891634 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.891647 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.891656 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:40Z","lastTransitionTime":"2026-01-29T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.993990 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.994034 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.994046 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.994063 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:40 crc kubenswrapper[4660]: I0129 12:06:40.994074 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:40Z","lastTransitionTime":"2026-01-29T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.096505 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.096565 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.096576 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.096598 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.096613 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:41Z","lastTransitionTime":"2026-01-29T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.198802 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.198854 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.198866 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.198882 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.198894 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:41Z","lastTransitionTime":"2026-01-29T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.302330 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.302370 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.302379 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.302393 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.302404 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:41Z","lastTransitionTime":"2026-01-29T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.404925 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.404984 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.404996 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.405017 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.405032 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:41Z","lastTransitionTime":"2026-01-29T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.428185 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:19:06.873253217 +0000 UTC Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.507889 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.507945 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.507961 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.508013 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.508030 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:41Z","lastTransitionTime":"2026-01-29T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.611291 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.611360 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.611375 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.611416 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.611431 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:41Z","lastTransitionTime":"2026-01-29T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.713853 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.713899 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.713911 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.713926 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.713939 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:41Z","lastTransitionTime":"2026-01-29T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.816498 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.816949 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.817058 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.817197 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.817296 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:41Z","lastTransitionTime":"2026-01-29T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.919474 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.919525 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.919537 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.919555 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:41 crc kubenswrapper[4660]: I0129 12:06:41.919566 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:41Z","lastTransitionTime":"2026-01-29T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.022004 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.022040 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.022049 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.022064 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.022075 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:42Z","lastTransitionTime":"2026-01-29T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.124974 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.125010 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.125018 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.125032 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.125045 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:42Z","lastTransitionTime":"2026-01-29T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.227440 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.227472 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.227481 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.227495 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.227505 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:42Z","lastTransitionTime":"2026-01-29T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.329595 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.329661 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.329709 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.329730 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.329744 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:42Z","lastTransitionTime":"2026-01-29T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.428557 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 06:37:47.507005704 +0000 UTC Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.432348 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.432489 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.432517 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.432546 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.432569 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:42Z","lastTransitionTime":"2026-01-29T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.468914 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.468967 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.468936 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:42 crc kubenswrapper[4660]: E0129 12:06:42.469074 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:42 crc kubenswrapper[4660]: E0129 12:06:42.469143 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:06:42 crc kubenswrapper[4660]: E0129 12:06:42.469210 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.469218 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:42 crc kubenswrapper[4660]: E0129 12:06:42.469438 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.536049 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.536101 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.536113 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.536129 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.536141 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:42Z","lastTransitionTime":"2026-01-29T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.638788 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.638826 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.638835 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.638848 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.638856 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:42Z","lastTransitionTime":"2026-01-29T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.741829 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.741903 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.741920 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.741944 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.741965 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:42Z","lastTransitionTime":"2026-01-29T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.844761 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.844803 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.844812 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.844826 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.844835 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:42Z","lastTransitionTime":"2026-01-29T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.947141 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.947385 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.947498 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.947582 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:42 crc kubenswrapper[4660]: I0129 12:06:42.947665 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:42Z","lastTransitionTime":"2026-01-29T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.049983 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.050012 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.050021 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.050034 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.050043 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:43Z","lastTransitionTime":"2026-01-29T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.153826 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.153880 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.153890 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.153904 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.153913 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:43Z","lastTransitionTime":"2026-01-29T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.256234 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.256277 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.256289 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.256302 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.256313 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:43Z","lastTransitionTime":"2026-01-29T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.358881 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.358939 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.358951 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.358971 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.358990 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:43Z","lastTransitionTime":"2026-01-29T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.429230 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:31:51.451731211 +0000 UTC Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.461737 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.461785 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.461796 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.461816 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.461828 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:43Z","lastTransitionTime":"2026-01-29T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.488834 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.499472 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.507984 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.525719 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"e:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:38.045396 6066 services_controller.go:360] Finished syncing service dns-default on 
namespace openshift-dns for network=default : 2.768584ms\\\\nI0129 12:06:38.045420 6066 services_controller.go:356] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-node for network=default\\\\nF0129 12:06:38.045426 6066 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:06:38.045429 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba5
8764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.546924 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.560445 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.564026 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.564053 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.564061 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.564089 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.564099 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:43Z","lastTransitionTime":"2026-01-29T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.572843 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.590260 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.603728 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d
8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.613609 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.622208 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.630082 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.640167 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.657335 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.667052 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.667121 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.667133 4660 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.667151 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.667162 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:43Z","lastTransitionTime":"2026-01-29T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.668079 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07
372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.680339 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.690430 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.770181 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.770233 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.770260 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.770277 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.770289 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:43Z","lastTransitionTime":"2026-01-29T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.873387 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.873434 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.873445 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.873461 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.873471 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:43Z","lastTransitionTime":"2026-01-29T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.975388 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.975443 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.975462 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.975480 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:43 crc kubenswrapper[4660]: I0129 12:06:43.975491 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:43Z","lastTransitionTime":"2026-01-29T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.385304 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.386025 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.386063 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.386088 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.386102 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:44Z","lastTransitionTime":"2026-01-29T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.429663 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:08:13.009167582 +0000 UTC Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.469225 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.469325 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.469332 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:44 crc kubenswrapper[4660]: E0129 12:06:44.469393 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:44 crc kubenswrapper[4660]: E0129 12:06:44.469481 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.469531 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:44 crc kubenswrapper[4660]: E0129 12:06:44.469583 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:06:44 crc kubenswrapper[4660]: E0129 12:06:44.469623 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.488635 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.488668 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.488677 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.488704 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.488715 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:44Z","lastTransitionTime":"2026-01-29T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.590810 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.590848 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.590857 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.590871 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.590880 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:44Z","lastTransitionTime":"2026-01-29T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:44 crc kubenswrapper[4660]: E0129 12:06:44.686996 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:06:44 crc kubenswrapper[4660]: E0129 12:06:44.687075 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs podName:37236252-cd23-4e04-8cf2-28b59af3e179 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:52.687057841 +0000 UTC m=+49.909999973 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs") pod "network-metrics-daemon-kj5hd" (UID: "37236252-cd23-4e04-8cf2-28b59af3e179") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.687464 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs\") pod \"network-metrics-daemon-kj5hd\" (UID: \"37236252-cd23-4e04-8cf2-28b59af3e179\") " pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.692973 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.693002 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.693011 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.693023 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.693032 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:44Z","lastTransitionTime":"2026-01-29T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.794821 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.794858 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.794867 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.794881 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.794890 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:44Z","lastTransitionTime":"2026-01-29T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.896808 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.897536 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.897566 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.897582 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.897593 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:44Z","lastTransitionTime":"2026-01-29T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.999261 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.999305 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.999313 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.999329 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:44 crc kubenswrapper[4660]: I0129 12:06:44.999338 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:44Z","lastTransitionTime":"2026-01-29T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.101742 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.101786 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.101798 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.101813 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.101825 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:45Z","lastTransitionTime":"2026-01-29T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.204247 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.204292 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.204304 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.204320 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.204331 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:45Z","lastTransitionTime":"2026-01-29T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.307131 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.307181 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.307192 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.307208 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.307218 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:45Z","lastTransitionTime":"2026-01-29T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.409779 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.409829 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.409839 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.409857 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.409870 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:45Z","lastTransitionTime":"2026-01-29T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.429965 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 01:44:33.794311558 +0000 UTC Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.512598 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.512633 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.512643 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.512657 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.512667 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:45Z","lastTransitionTime":"2026-01-29T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.615665 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.615738 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.615776 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.615798 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.615813 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:45Z","lastTransitionTime":"2026-01-29T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.718039 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.718081 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.718092 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.718110 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.718122 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:45Z","lastTransitionTime":"2026-01-29T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.821393 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.821517 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.821533 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.821573 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.821596 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:45Z","lastTransitionTime":"2026-01-29T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.924072 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.924279 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.924345 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.924412 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:45 crc kubenswrapper[4660]: I0129 12:06:45.924479 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:45Z","lastTransitionTime":"2026-01-29T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.028109 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.028170 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.028185 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.028207 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.028221 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:46Z","lastTransitionTime":"2026-01-29T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.133147 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.133204 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.133216 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.133251 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.133266 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:46Z","lastTransitionTime":"2026-01-29T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.236333 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.236397 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.236406 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.236423 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.236434 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:46Z","lastTransitionTime":"2026-01-29T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.338967 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.339006 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.339015 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.339033 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.339043 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:46Z","lastTransitionTime":"2026-01-29T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.430569 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:45:02.807942373 +0000 UTC Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.441863 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.441916 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.441927 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.441948 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.441964 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:46Z","lastTransitionTime":"2026-01-29T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.469440 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.469495 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.469524 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.469440 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:46 crc kubenswrapper[4660]: E0129 12:06:46.469760 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:46 crc kubenswrapper[4660]: E0129 12:06:46.469591 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:46 crc kubenswrapper[4660]: E0129 12:06:46.469943 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:46 crc kubenswrapper[4660]: E0129 12:06:46.469989 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.544461 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.544525 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.544537 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.544552 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.544563 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:46Z","lastTransitionTime":"2026-01-29T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.647248 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.647311 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.647327 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.647350 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.647367 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:46Z","lastTransitionTime":"2026-01-29T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.750220 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.750642 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.750781 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.750954 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.751054 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:46Z","lastTransitionTime":"2026-01-29T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.853853 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.854132 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.854228 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.854355 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.854682 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:46Z","lastTransitionTime":"2026-01-29T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.958014 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.958367 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.958509 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.958635 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:46 crc kubenswrapper[4660]: I0129 12:06:46.958786 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:46Z","lastTransitionTime":"2026-01-29T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.062672 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.062755 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.062768 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.062815 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.062830 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:47Z","lastTransitionTime":"2026-01-29T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.166793 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.166840 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.166850 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.166869 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.166883 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:47Z","lastTransitionTime":"2026-01-29T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.270757 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.270802 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.270811 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.270829 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.270840 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:47Z","lastTransitionTime":"2026-01-29T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.373905 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.373950 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.373959 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.373978 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.373989 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:47Z","lastTransitionTime":"2026-01-29T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.430732 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 09:29:58.462674426 +0000 UTC Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.476245 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.476291 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.476302 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.476321 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.476334 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:47Z","lastTransitionTime":"2026-01-29T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.579372 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.579420 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.579430 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.579443 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.579468 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:47Z","lastTransitionTime":"2026-01-29T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.681991 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.682042 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.682056 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.682076 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.682088 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:47Z","lastTransitionTime":"2026-01-29T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.784377 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.784656 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.784745 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.784845 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.784915 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:47Z","lastTransitionTime":"2026-01-29T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.887036 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.887258 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.887317 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.887405 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.887466 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:47Z","lastTransitionTime":"2026-01-29T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.990172 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.990214 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.990225 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.990239 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:47 crc kubenswrapper[4660]: I0129 12:06:47.990250 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:47Z","lastTransitionTime":"2026-01-29T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.092143 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.092193 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.092208 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.092230 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.092246 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:48Z","lastTransitionTime":"2026-01-29T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.194487 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.194540 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.194552 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.194572 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.194585 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:48Z","lastTransitionTime":"2026-01-29T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.297912 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.297954 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.297963 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.297979 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.297988 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:48Z","lastTransitionTime":"2026-01-29T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.400858 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.401139 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.401205 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.401269 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.401335 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:48Z","lastTransitionTime":"2026-01-29T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.431604 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 20:27:37.860549827 +0000 UTC Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.469060 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.469177 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.469060 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:48 crc kubenswrapper[4660]: E0129 12:06:48.469213 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.469070 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:48 crc kubenswrapper[4660]: E0129 12:06:48.469339 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:06:48 crc kubenswrapper[4660]: E0129 12:06:48.469421 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:48 crc kubenswrapper[4660]: E0129 12:06:48.469497 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.503823 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.503878 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.503889 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.503907 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.503926 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:48Z","lastTransitionTime":"2026-01-29T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.607256 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.607308 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.607321 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.607342 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.607356 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:48Z","lastTransitionTime":"2026-01-29T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.709377 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.709420 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.709431 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.709447 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.709459 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:48Z","lastTransitionTime":"2026-01-29T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.812256 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.812315 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.812327 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.812352 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.812365 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:48Z","lastTransitionTime":"2026-01-29T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.914914 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.914971 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.914987 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.915009 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.915025 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:48Z","lastTransitionTime":"2026-01-29T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.946741 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.946788 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.946799 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.946815 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.946830 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:48Z","lastTransitionTime":"2026-01-29T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:48 crc kubenswrapper[4660]: E0129 12:06:48.963864 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.967827 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.967903 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.967914 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.967928 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.967938 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:48Z","lastTransitionTime":"2026-01-29T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:48 crc kubenswrapper[4660]: E0129 12:06:48.979440 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.983374 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.983410 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.983419 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.983432 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:48 crc kubenswrapper[4660]: I0129 12:06:48.983443 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:48Z","lastTransitionTime":"2026-01-29T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:48 crc kubenswrapper[4660]: E0129 12:06:48.996262 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:48Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:48.999984 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.000028 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.000041 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.000059 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.000075 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:49Z","lastTransitionTime":"2026-01-29T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:49 crc kubenswrapper[4660]: E0129 12:06:49.016767 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.023517 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.023559 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.023571 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.023590 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.023601 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:49Z","lastTransitionTime":"2026-01-29T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:49 crc kubenswrapper[4660]: E0129 12:06:49.048436 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:49 crc kubenswrapper[4660]: E0129 12:06:49.048598 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.050341 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.050384 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.050395 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.050411 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.050423 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:49Z","lastTransitionTime":"2026-01-29T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.152950 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.153011 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.153027 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.153054 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.153069 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:49Z","lastTransitionTime":"2026-01-29T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.255256 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.255301 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.255310 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.255325 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.255335 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:49Z","lastTransitionTime":"2026-01-29T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.358038 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.358082 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.358107 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.358121 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.358153 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:49Z","lastTransitionTime":"2026-01-29T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.433087 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:59:01.08103786 +0000 UTC Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.461903 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.461958 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.461970 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.461989 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.462006 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:49Z","lastTransitionTime":"2026-01-29T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.565515 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.565581 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.565606 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.565655 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.565679 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:49Z","lastTransitionTime":"2026-01-29T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.668134 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.668178 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.668192 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.668209 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.668220 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:49Z","lastTransitionTime":"2026-01-29T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.770740 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.770772 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.770782 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.770801 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.770812 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:49Z","lastTransitionTime":"2026-01-29T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.873039 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.873085 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.873097 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.873113 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.873123 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:49Z","lastTransitionTime":"2026-01-29T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.975232 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.975531 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.975613 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.975720 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:49 crc kubenswrapper[4660]: I0129 12:06:49.975817 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:49Z","lastTransitionTime":"2026-01-29T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.079077 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.079122 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.079134 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.079152 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.079164 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:50Z","lastTransitionTime":"2026-01-29T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.182016 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.182063 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.182075 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.182094 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.182109 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:50Z","lastTransitionTime":"2026-01-29T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.284709 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.284755 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.284767 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.284784 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.284795 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:50Z","lastTransitionTime":"2026-01-29T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.387393 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.387436 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.387449 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.387465 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.387479 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:50Z","lastTransitionTime":"2026-01-29T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.434052 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:48:01.3738192 +0000 UTC Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.469730 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.469766 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:50 crc kubenswrapper[4660]: E0129 12:06:50.469868 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:50 crc kubenswrapper[4660]: E0129 12:06:50.470038 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.470211 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.470222 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:50 crc kubenswrapper[4660]: E0129 12:06:50.470591 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:06:50 crc kubenswrapper[4660]: E0129 12:06:50.470606 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.490348 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.490390 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.490405 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.490431 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.490446 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:50Z","lastTransitionTime":"2026-01-29T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.592864 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.592912 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.592924 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.592939 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.592952 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:50Z","lastTransitionTime":"2026-01-29T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.631908 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.638775 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.647910 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.659294 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2b
b72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.669959 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.682763 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.694758 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.694822 4660 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.694832 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.694846 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.694872 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:50Z","lastTransitionTime":"2026-01-29T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.696809 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.710744 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.729331 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"e:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:38.045396 6066 services_controller.go:360] Finished syncing service dns-default on 
namespace openshift-dns for network=default : 2.768584ms\\\\nI0129 12:06:38.045420 6066 services_controller.go:356] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-node for network=default\\\\nF0129 12:06:38.045426 6066 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:06:38.045429 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba5
8764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.748536 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.763605 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d
8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.774668 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed
0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.785663 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.797162 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.797198 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.797206 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.797218 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.797227 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:50Z","lastTransitionTime":"2026-01-29T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.798102 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.814018 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T
12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 
12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.827301 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.841167 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.854118 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.865203 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:50 crc 
kubenswrapper[4660]: I0129 12:06:50.899550 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.899596 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.899632 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.899651 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:50 crc kubenswrapper[4660]: I0129 12:06:50.899665 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:50Z","lastTransitionTime":"2026-01-29T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.002367 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.002424 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.002435 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.002451 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.002462 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:51Z","lastTransitionTime":"2026-01-29T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.104901 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.104949 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.104959 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.104973 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.104983 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:51Z","lastTransitionTime":"2026-01-29T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.207493 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.207544 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.207558 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.207579 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.207595 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:51Z","lastTransitionTime":"2026-01-29T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.309846 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.309905 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.309914 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.309927 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.309937 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:51Z","lastTransitionTime":"2026-01-29T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.412071 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.412119 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.412130 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.412149 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.412162 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:51Z","lastTransitionTime":"2026-01-29T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.434388 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 04:02:19.23859866 +0000 UTC Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.514410 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.514443 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.514453 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.514471 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.514488 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:51Z","lastTransitionTime":"2026-01-29T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.616921 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.616960 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.616971 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.616987 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.616999 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:51Z","lastTransitionTime":"2026-01-29T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.720014 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.720068 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.720084 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.720123 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.720137 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:51Z","lastTransitionTime":"2026-01-29T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.822975 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.823020 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.823030 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.823048 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.823061 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:51Z","lastTransitionTime":"2026-01-29T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.926829 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.926881 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.926896 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.926916 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:51 crc kubenswrapper[4660]: I0129 12:06:51.926929 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:51Z","lastTransitionTime":"2026-01-29T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.030659 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.030720 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.030733 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.030750 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.030762 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:52Z","lastTransitionTime":"2026-01-29T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.133788 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.133830 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.133840 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.133856 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.133867 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:52Z","lastTransitionTime":"2026-01-29T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.236306 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.236357 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.236366 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.236384 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.236395 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:52Z","lastTransitionTime":"2026-01-29T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.339413 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.339462 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.339475 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.339493 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.339507 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:52Z","lastTransitionTime":"2026-01-29T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.435573 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 11:52:47.557934917 +0000 UTC Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.442509 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.442552 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.442563 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.442576 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.442586 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:52Z","lastTransitionTime":"2026-01-29T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.469624 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.469703 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.469684 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.470082 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:52 crc kubenswrapper[4660]: E0129 12:06:52.470171 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:52 crc kubenswrapper[4660]: E0129 12:06:52.470288 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:52 crc kubenswrapper[4660]: E0129 12:06:52.470336 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:06:52 crc kubenswrapper[4660]: E0129 12:06:52.470427 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.470628 4660 scope.go:117] "RemoveContainer" containerID="46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.546275 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.546606 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.546617 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.546632 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.546663 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:52Z","lastTransitionTime":"2026-01-29T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.649465 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.649508 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.649519 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.649538 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.649552 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:52Z","lastTransitionTime":"2026-01-29T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.751577 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.751619 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.751628 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.751641 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.751651 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:52Z","lastTransitionTime":"2026-01-29T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.764259 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs\") pod \"network-metrics-daemon-kj5hd\" (UID: \"37236252-cd23-4e04-8cf2-28b59af3e179\") " pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:52 crc kubenswrapper[4660]: E0129 12:06:52.764471 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:06:52 crc kubenswrapper[4660]: E0129 12:06:52.764583 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs podName:37236252-cd23-4e04-8cf2-28b59af3e179 nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.764547254 +0000 UTC m=+65.987489466 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs") pod "network-metrics-daemon-kj5hd" (UID: "37236252-cd23-4e04-8cf2-28b59af3e179") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.853605 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.853641 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.853651 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.853664 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.853673 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:52Z","lastTransitionTime":"2026-01-29T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.956038 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.956063 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.956071 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.956084 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.956093 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:52Z","lastTransitionTime":"2026-01-29T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.970521 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovnkube-controller/1.log" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.973764 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerStarted","Data":"a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a"} Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.974591 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:06:52 crc kubenswrapper[4660]: I0129 12:06:52.996051 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94
b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.010928 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1
f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.029902 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.046786 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a
4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.059278 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.059336 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.059353 4660 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.059374 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.059387 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:53Z","lastTransitionTime":"2026-01-29T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.064622 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.078086 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.092151 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.106402 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.119252 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.133465 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc 
kubenswrapper[4660]: I0129 12:06:53.149752 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.166322 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.166374 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.166389 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.166409 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.166424 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:53Z","lastTransitionTime":"2026-01-29T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.168625 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.180405 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.196547 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07f98086-3aca-40a2-90d1-2c75b528a82e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cd5ea2f1041217d885552fe898a7b87a706c7588f60fd398e7c73ba73cfb9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef3573008a2c750cbb5bcc0762d2d2bb801f7a537212eafcb1519942e2440b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f741a954cd77a898b743569dacdea3c0a9f16f16151b916fd2c5a6217729e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.211185 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.227464 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.242986 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.262086 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"e:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:38.045396 6066 services_controller.go:360] Finished syncing service dns-default on 
namespace openshift-dns for network=default : 2.768584ms\\\\nI0129 12:06:38.045420 6066 services_controller.go:356] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-node for network=default\\\\nF0129 12:06:38.045426 6066 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:06:38.045429 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.268962 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.268995 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.269007 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.269028 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.269039 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:53Z","lastTransitionTime":"2026-01-29T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.372053 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.372099 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.372111 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.372127 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.372138 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:53Z","lastTransitionTime":"2026-01-29T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.436464 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 18:31:25.276190005 +0000 UTC Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.474115 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.474169 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.474184 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.474202 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.474211 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:53Z","lastTransitionTime":"2026-01-29T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.481264 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.500609 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T
12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 
12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.519980 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.535803 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.556108 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.567898 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.576093 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.576150 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.576162 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.576183 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.576197 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:53Z","lastTransitionTime":"2026-01-29T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.579059 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.590476 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc 
kubenswrapper[4660]: I0129 12:06:53.608211 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.624102 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.638354 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb
276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.653329 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07f98086-3aca-40a2-90d1-2c75b528a82e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cd5ea2f1041217d885552fe898a7b87a706c7588f60fd398e7c73ba73cfb9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef3573008a2c750cbb5bcc0762d2d2bb801f7a537212eafcb1519942e2440b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f741a954cd77a898b743569dacdea3c0a9f16f16151b916fd2c5a6217729e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.668418 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.679125 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.679163 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.679174 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.679192 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.679208 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:53Z","lastTransitionTime":"2026-01-29T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.688288 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.703675 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.723918 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"e:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:38.045396 6066 services_controller.go:360] Finished syncing service dns-default on 
namespace openshift-dns for network=default : 2.768584ms\\\\nI0129 12:06:38.045420 6066 services_controller.go:356] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-node for network=default\\\\nF0129 12:06:38.045426 6066 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:06:38.045429 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.748825 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.763955 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d
8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.781185 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.781221 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.781231 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.781246 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.781257 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:53Z","lastTransitionTime":"2026-01-29T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.883748 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.884002 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.884115 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.884204 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.884278 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:53Z","lastTransitionTime":"2026-01-29T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.978980 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovnkube-controller/2.log" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.979555 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovnkube-controller/1.log" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.981805 4660 generic.go:334] "Generic (PLEG): container finished" podID="39de46a2-9cba-4331-aab2-697f0337563c" containerID="a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a" exitCode=1 Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.981847 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerDied","Data":"a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a"} Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.981891 4660 scope.go:117] "RemoveContainer" containerID="46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.982677 4660 scope.go:117] "RemoveContainer" containerID="a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a" Jan 29 12:06:53 crc kubenswrapper[4660]: E0129 12:06:53.982931 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.986988 4660 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.987029 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.987042 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.987062 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:53 crc kubenswrapper[4660]: I0129 12:06:53.987074 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:53Z","lastTransitionTime":"2026-01-29T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.000614 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.011584 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.024016 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.040838 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.055417 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.069658 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.085226 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.090714 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:54 crc 
kubenswrapper[4660]: I0129 12:06:54.090781 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.090797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.090820 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.090850 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:54Z","lastTransitionTime":"2026-01-29T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.100506 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.122185 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.138443 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.150672 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.167388 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07f98086-3aca-40a2-90d1-2c75b528a82e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cd5ea2f1041217d885552fe898a7b87a706c7588f60fd398e7c73ba73cfb9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef3573008a2c750cbb5bcc0762d2d2bb801f7a537212eafcb1519942e2440b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f741a954cd77a898b743569dacdea3c0a9f16f16151b916fd2c5a6217729e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.182450 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.194058 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.194099 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.194109 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.194127 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.194138 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:54Z","lastTransitionTime":"2026-01-29T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.198642 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.214047 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.235290 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f525ecb8abd412731e1e5609cb31598ea0f97f07136bd1aa0000ad1be3a57c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:38Z\\\",\\\"message\\\":\\\"e:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:38.045396 6066 services_controller.go:360] Finished syncing service dns-default on 
namespace openshift-dns for network=default : 2.768584ms\\\\nI0129 12:06:38.045420 6066 services_controller.go:356] Processing sync for service openshift-ovn-kubernetes/ovn-kubernetes-node for network=default\\\\nF0129 12:06:38.045426 6066 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:37Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:06:38.045429 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:53Z\\\",\\\"message\\\":\\\"GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225279 6213 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225367 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:06:53.225444 6213 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf
72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.257217 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.278088 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d
8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.280320 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.280460 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 
12:06:54.280513 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:26.280477722 +0000 UTC m=+83.503419894 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.280598 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.280624 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.280794 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:07:26.280757981 +0000 UTC m=+83.503700113 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.280822 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.280906 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:07:26.280880495 +0000 UTC m=+83.503822617 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.296726 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.296785 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.296796 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.296817 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.296830 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:54Z","lastTransitionTime":"2026-01-29T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.381506 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.381576 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.381734 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.381757 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.381770 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.381823 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 12:07:26.381804982 +0000 UTC m=+83.604747284 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.381734 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.381998 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.382032 4660 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.382128 4660 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:07:26.382103881 +0000 UTC m=+83.605046053 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.400447 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.400498 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.400511 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.400530 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.400543 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:54Z","lastTransitionTime":"2026-01-29T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.437726 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:21:30.7112196 +0000 UTC Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.469298 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.469368 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.469337 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.469504 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.469607 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.469780 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.470196 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.470568 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.502778 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.502832 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.502844 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.502864 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.502877 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:54Z","lastTransitionTime":"2026-01-29T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.605565 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.605605 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.605617 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.605635 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.605646 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:54Z","lastTransitionTime":"2026-01-29T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.708329 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.708575 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.708644 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.708740 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.708826 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:54Z","lastTransitionTime":"2026-01-29T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.812002 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.812052 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.812064 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.812081 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.812094 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:54Z","lastTransitionTime":"2026-01-29T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.914840 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.914895 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.914908 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.914924 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.914935 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:54Z","lastTransitionTime":"2026-01-29T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.988621 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovnkube-controller/2.log" Jan 29 12:06:54 crc kubenswrapper[4660]: I0129 12:06:54.992638 4660 scope.go:117] "RemoveContainer" containerID="a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a" Jan 29 12:06:54 crc kubenswrapper[4660]: E0129 12:06:54.992832 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.013595 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.018427 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.018512 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.018535 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.018565 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.018589 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:55Z","lastTransitionTime":"2026-01-29T12:06:55Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.033985 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0
a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.054779 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.073798 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.087328 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.102602 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.118471 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.121479 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.121547 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.121560 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.121588 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.121603 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:55Z","lastTransitionTime":"2026-01-29T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.133653 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc 
kubenswrapper[4660]: I0129 12:06:55.149120 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.163478 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.179936 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb
276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.194903 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07f98086-3aca-40a2-90d1-2c75b528a82e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cd5ea2f1041217d885552fe898a7b87a706c7588f60fd398e7c73ba73cfb9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef3573008a2c750cbb5bcc0762d2d2bb801f7a537212eafcb1519942e2440b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f741a954cd77a898b743569dacdea3c0a9f16f16151b916fd2c5a6217729e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.212043 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.224807 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.224848 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.224860 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.224895 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.224909 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:55Z","lastTransitionTime":"2026-01-29T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.230226 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.247968 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.270149 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:53Z\\\",\\\"message\\\":\\\"GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225279 6213 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225367 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:06:53.225444 6213 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba5
8764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.293498 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.308325 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d
8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.327301 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.327366 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.327376 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.327407 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.327418 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:55Z","lastTransitionTime":"2026-01-29T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.430272 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.430319 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.430331 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.430349 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.430361 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:55Z","lastTransitionTime":"2026-01-29T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.438734 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 17:35:26.842761578 +0000 UTC Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.532707 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.532748 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.532761 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.532779 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.532791 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:55Z","lastTransitionTime":"2026-01-29T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.635792 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.635842 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.635854 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.635872 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.635883 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:55Z","lastTransitionTime":"2026-01-29T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.739239 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.739305 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.739314 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.739330 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.739343 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:55Z","lastTransitionTime":"2026-01-29T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.841994 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.842022 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.842032 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.842047 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.842058 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:55Z","lastTransitionTime":"2026-01-29T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.944741 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.944784 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.944796 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.944820 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:55 crc kubenswrapper[4660]: I0129 12:06:55.944834 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:55Z","lastTransitionTime":"2026-01-29T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.048057 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.048108 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.048121 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.048139 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.048154 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:56Z","lastTransitionTime":"2026-01-29T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.151136 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.151252 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.151276 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.151341 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.151562 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:56Z","lastTransitionTime":"2026-01-29T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.253840 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.253888 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.253900 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.253919 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.253933 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:56Z","lastTransitionTime":"2026-01-29T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.357861 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.357935 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.357954 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.357978 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.357999 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:56Z","lastTransitionTime":"2026-01-29T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.439730 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:01:31.130711737 +0000 UTC Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.461333 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.461384 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.461398 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.461419 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.461433 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:56Z","lastTransitionTime":"2026-01-29T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.469619 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.469665 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.469651 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.469629 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:56 crc kubenswrapper[4660]: E0129 12:06:56.469843 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:06:56 crc kubenswrapper[4660]: E0129 12:06:56.469942 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:56 crc kubenswrapper[4660]: E0129 12:06:56.469997 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:56 crc kubenswrapper[4660]: E0129 12:06:56.470036 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.563854 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.563902 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.563918 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.563938 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.563962 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:56Z","lastTransitionTime":"2026-01-29T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.667042 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.667107 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.667118 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.667140 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.667152 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:56Z","lastTransitionTime":"2026-01-29T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.771104 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.771152 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.771164 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.771185 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.771197 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:56Z","lastTransitionTime":"2026-01-29T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.874526 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.874583 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.874595 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.874617 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.874630 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:56Z","lastTransitionTime":"2026-01-29T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.978338 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.978402 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.978413 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.978438 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:56 crc kubenswrapper[4660]: I0129 12:06:56.978919 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:56Z","lastTransitionTime":"2026-01-29T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.081645 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.081683 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.081718 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.081737 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.081749 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:57Z","lastTransitionTime":"2026-01-29T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.184005 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.184046 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.184056 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.184074 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.184086 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:57Z","lastTransitionTime":"2026-01-29T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.287195 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.287250 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.287261 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.287276 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.287287 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:57Z","lastTransitionTime":"2026-01-29T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.390583 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.390971 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.391114 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.391265 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.391452 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:57Z","lastTransitionTime":"2026-01-29T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.440147 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 23:28:12.778382025 +0000 UTC Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.493703 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.493731 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.493740 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.493755 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.493764 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:57Z","lastTransitionTime":"2026-01-29T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.596919 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.596961 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.596972 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.596993 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.597005 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:57Z","lastTransitionTime":"2026-01-29T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.699676 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.699764 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.699777 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.699797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.699811 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:57Z","lastTransitionTime":"2026-01-29T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.802952 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.802995 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.803007 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.803024 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.803036 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:57Z","lastTransitionTime":"2026-01-29T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.906040 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.906079 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.906088 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.906105 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:57 crc kubenswrapper[4660]: I0129 12:06:57.906115 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:57Z","lastTransitionTime":"2026-01-29T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.008642 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.008709 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.008723 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.008738 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.008751 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:58Z","lastTransitionTime":"2026-01-29T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.112459 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.112527 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.112545 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.112569 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.112591 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:58Z","lastTransitionTime":"2026-01-29T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.215199 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.215263 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.215276 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.215296 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.215309 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:58Z","lastTransitionTime":"2026-01-29T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.319108 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.319162 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.319172 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.319194 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.319208 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:58Z","lastTransitionTime":"2026-01-29T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.422471 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.422516 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.422525 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.422547 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.422571 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:58Z","lastTransitionTime":"2026-01-29T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.441117 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:08:47.731310877 +0000 UTC Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.469480 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.469557 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.469632 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.469508 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:58 crc kubenswrapper[4660]: E0129 12:06:58.469637 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:58 crc kubenswrapper[4660]: E0129 12:06:58.469835 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:58 crc kubenswrapper[4660]: E0129 12:06:58.469964 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:06:58 crc kubenswrapper[4660]: E0129 12:06:58.470074 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.526148 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.526205 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.526221 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.526240 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.526253 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:58Z","lastTransitionTime":"2026-01-29T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.629191 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.629231 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.629240 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.629255 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.629265 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:58Z","lastTransitionTime":"2026-01-29T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.731017 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.731080 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.731090 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.731104 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.731113 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:58Z","lastTransitionTime":"2026-01-29T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.834276 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.834336 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.834348 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.834369 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.834384 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:58Z","lastTransitionTime":"2026-01-29T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.937755 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.937797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.937806 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.937827 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:58 crc kubenswrapper[4660]: I0129 12:06:58.937839 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:58Z","lastTransitionTime":"2026-01-29T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.040723 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.040773 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.040782 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.040796 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.040806 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:59Z","lastTransitionTime":"2026-01-29T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.143587 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.143656 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.143667 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.143680 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.143717 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:59Z","lastTransitionTime":"2026-01-29T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.246796 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.246893 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.246915 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.246950 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.246971 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:59Z","lastTransitionTime":"2026-01-29T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.349097 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.349159 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.349178 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.349201 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.349218 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:59Z","lastTransitionTime":"2026-01-29T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.373588 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.373645 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.373663 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.373724 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.373745 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:59Z","lastTransitionTime":"2026-01-29T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:59 crc kubenswrapper[4660]: E0129 12:06:59.388378 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.391968 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.391999 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.392012 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.392030 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.392042 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:59Z","lastTransitionTime":"2026-01-29T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:59 crc kubenswrapper[4660]: E0129 12:06:59.410865 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.414965 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.415017 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.415028 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.415042 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.415052 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:59Z","lastTransitionTime":"2026-01-29T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:59 crc kubenswrapper[4660]: E0129 12:06:59.433340 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.437359 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.437408 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.437423 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.437443 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.437455 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:59Z","lastTransitionTime":"2026-01-29T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.441322 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 22:10:46.859878869 +0000 UTC Jan 29 12:06:59 crc kubenswrapper[4660]: E0129 12:06:59.451624 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",
\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.455215 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.455245 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.455287 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.455308 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.455318 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:59Z","lastTransitionTime":"2026-01-29T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:59 crc kubenswrapper[4660]: E0129 12:06:59.465597 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:59 crc kubenswrapper[4660]: E0129 12:06:59.465746 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.466942 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.466974 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.466984 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.467000 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.467012 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:59Z","lastTransitionTime":"2026-01-29T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.570518 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.570558 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.570567 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.570584 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.570593 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:59Z","lastTransitionTime":"2026-01-29T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.673828 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.673874 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.673885 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.673906 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.673923 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:59Z","lastTransitionTime":"2026-01-29T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.776710 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.776765 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.776777 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.776797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.776813 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:59Z","lastTransitionTime":"2026-01-29T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.879943 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.879982 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.879992 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.880010 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.880021 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:59Z","lastTransitionTime":"2026-01-29T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.982893 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.982933 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.982944 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.982966 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:59 crc kubenswrapper[4660]: I0129 12:06:59.982981 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:59Z","lastTransitionTime":"2026-01-29T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.086435 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.086523 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.086535 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.086558 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.086571 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:00Z","lastTransitionTime":"2026-01-29T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.189652 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.189737 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.189758 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.189787 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.189806 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:00Z","lastTransitionTime":"2026-01-29T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.292337 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.292394 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.292407 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.292429 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.292445 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:00Z","lastTransitionTime":"2026-01-29T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.395663 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.395741 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.395762 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.395785 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.395799 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:00Z","lastTransitionTime":"2026-01-29T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.441573 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:44:55.713060811 +0000 UTC Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.469827 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.469911 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.469868 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.469853 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:00 crc kubenswrapper[4660]: E0129 12:07:00.470055 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:00 crc kubenswrapper[4660]: E0129 12:07:00.470237 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:00 crc kubenswrapper[4660]: E0129 12:07:00.470476 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:00 crc kubenswrapper[4660]: E0129 12:07:00.470523 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.500453 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.500497 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.500510 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.500534 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.500549 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:00Z","lastTransitionTime":"2026-01-29T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.603201 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.603264 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.603277 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.603300 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.603315 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:00Z","lastTransitionTime":"2026-01-29T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.706180 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.706235 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.706249 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.706279 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.706294 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:00Z","lastTransitionTime":"2026-01-29T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.809603 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.810111 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.810128 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.810152 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.810168 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:00Z","lastTransitionTime":"2026-01-29T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.912848 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.912995 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.913009 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.913030 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:00 crc kubenswrapper[4660]: I0129 12:07:00.913043 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:00Z","lastTransitionTime":"2026-01-29T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.015991 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.016036 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.016047 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.016066 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.016077 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:01Z","lastTransitionTime":"2026-01-29T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.119417 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.119469 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.119480 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.119499 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.119512 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:01Z","lastTransitionTime":"2026-01-29T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.222418 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.222479 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.222492 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.222515 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.222530 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:01Z","lastTransitionTime":"2026-01-29T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.326636 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.326692 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.326704 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.326723 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.326762 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:01Z","lastTransitionTime":"2026-01-29T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.429758 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.429834 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.429846 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.429870 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.429884 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:01Z","lastTransitionTime":"2026-01-29T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.442721 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:26:04.83968699 +0000 UTC Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.532375 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.532425 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.532437 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.532454 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.532466 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:01Z","lastTransitionTime":"2026-01-29T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.636270 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.636330 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.636341 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.636365 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.636381 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:01Z","lastTransitionTime":"2026-01-29T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.739564 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.739645 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.739668 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.739736 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.739760 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:01Z","lastTransitionTime":"2026-01-29T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.843681 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.843788 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.843807 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.844346 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.844437 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:01Z","lastTransitionTime":"2026-01-29T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.947929 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.947985 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.948000 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.948029 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:01 crc kubenswrapper[4660]: I0129 12:07:01.948043 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:01Z","lastTransitionTime":"2026-01-29T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.050591 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.050628 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.050640 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.050658 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.050669 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:02Z","lastTransitionTime":"2026-01-29T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.153121 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.153175 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.153204 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.153220 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.153231 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:02Z","lastTransitionTime":"2026-01-29T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.255976 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.256044 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.256056 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.256075 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.256088 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:02Z","lastTransitionTime":"2026-01-29T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.359558 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.359614 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.359636 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.359665 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.359697 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:02Z","lastTransitionTime":"2026-01-29T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.443955 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:07:13.009361951 +0000 UTC Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.463029 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.463093 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.463110 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.463134 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.463150 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:02Z","lastTransitionTime":"2026-01-29T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.468936 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.468996 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.469221 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.469241 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:02 crc kubenswrapper[4660]: E0129 12:07:02.469378 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:02 crc kubenswrapper[4660]: E0129 12:07:02.469510 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:02 crc kubenswrapper[4660]: E0129 12:07:02.469553 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:02 crc kubenswrapper[4660]: E0129 12:07:02.469612 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.567198 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.567259 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.567277 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.567305 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.567324 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:02Z","lastTransitionTime":"2026-01-29T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.671590 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.671676 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.672016 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.672542 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.672623 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:02Z","lastTransitionTime":"2026-01-29T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.775723 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.775826 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.775850 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.775979 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.776006 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:02Z","lastTransitionTime":"2026-01-29T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.879045 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.879097 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.879120 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.879149 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.879172 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:02Z","lastTransitionTime":"2026-01-29T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.982902 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.982965 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.982984 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.983008 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:02 crc kubenswrapper[4660]: I0129 12:07:02.983026 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:02Z","lastTransitionTime":"2026-01-29T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.086883 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.087116 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.087155 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.087187 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.087214 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:03Z","lastTransitionTime":"2026-01-29T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.190736 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.191059 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.191211 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.191327 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.191453 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:03Z","lastTransitionTime":"2026-01-29T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.294725 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.294768 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.294782 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.294827 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.294843 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:03Z","lastTransitionTime":"2026-01-29T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.397102 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.397137 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.397146 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.397162 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.397170 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:03Z","lastTransitionTime":"2026-01-29T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.445195 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:49:48.795479102 +0000 UTC Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.495377 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\
\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef5
3a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.502183 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.502233 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.502247 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.502279 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.502296 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:03Z","lastTransitionTime":"2026-01-29T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.521755 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.535919 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.551476 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf
6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.561467 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.574011 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.589044 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.603564 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.605238 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.605742 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.605771 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.605800 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.605816 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:03Z","lastTransitionTime":"2026-01-29T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.618683 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.629604 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.645922 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.656660 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.666734 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.684128 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:53Z\\\",\\\"message\\\":\\\"GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225279 6213 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225367 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:06:53.225444 6213 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba5
8764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.696903 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07f98086-3aca-40a2-90d1-2c75b528a82e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cd5ea2f1041217d885552fe898a7b87a706c7588f60fd398e7c73ba73cfb9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef3573008a2c750cbb5bcc0762d2d2bb801f7a537212eafcb1519942e2440b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f741a954cd77a898b743569dacdea3c0a9f16f16151b916fd2c5a6217729e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.708508 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.708566 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.708580 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.708596 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.708606 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:03Z","lastTransitionTime":"2026-01-29T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.710655 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.723875 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.743453 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.810483 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.810772 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.810933 4660 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.811031 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.811120 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:03Z","lastTransitionTime":"2026-01-29T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.913224 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.913274 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.913288 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.913303 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:03 crc kubenswrapper[4660]: I0129 12:07:03.913312 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:03Z","lastTransitionTime":"2026-01-29T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.016015 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.016051 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.016062 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.016079 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.016091 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:04Z","lastTransitionTime":"2026-01-29T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.118654 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.118920 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.118999 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.119099 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.119183 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:04Z","lastTransitionTime":"2026-01-29T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.221070 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.221100 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.221113 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.221129 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.221139 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:04Z","lastTransitionTime":"2026-01-29T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.323598 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.323650 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.323669 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.323697 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.323745 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:04Z","lastTransitionTime":"2026-01-29T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.426008 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.426052 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.426064 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.426081 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.426092 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:04Z","lastTransitionTime":"2026-01-29T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.446355 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:52:15.305022672 +0000 UTC Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.469021 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.469079 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:04 crc kubenswrapper[4660]: E0129 12:07:04.469162 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:04 crc kubenswrapper[4660]: E0129 12:07:04.469278 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.469405 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:04 crc kubenswrapper[4660]: E0129 12:07:04.469541 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.469901 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:04 crc kubenswrapper[4660]: E0129 12:07:04.469994 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.528915 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.528976 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.528992 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.529016 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.529036 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:04Z","lastTransitionTime":"2026-01-29T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.633339 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.634245 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.634545 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.634918 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.635557 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:04Z","lastTransitionTime":"2026-01-29T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.739913 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.739949 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.739959 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.739973 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.739983 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:04Z","lastTransitionTime":"2026-01-29T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.842547 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.842577 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.842585 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.842598 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.842607 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:04Z","lastTransitionTime":"2026-01-29T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.944419 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.944455 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.944463 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.944476 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:04 crc kubenswrapper[4660]: I0129 12:07:04.944485 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:04Z","lastTransitionTime":"2026-01-29T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.046888 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.046921 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.046932 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.046947 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.046956 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:05Z","lastTransitionTime":"2026-01-29T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.149665 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.149951 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.149963 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.149986 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.149998 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:05Z","lastTransitionTime":"2026-01-29T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.253079 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.253163 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.253173 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.253194 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.253208 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:05Z","lastTransitionTime":"2026-01-29T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.357718 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.358130 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.358232 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.358391 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.358510 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:05Z","lastTransitionTime":"2026-01-29T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.447220 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:22:27.546390571 +0000 UTC Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.462742 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.463129 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.463246 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.463315 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.463392 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:05Z","lastTransitionTime":"2026-01-29T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.566720 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.566762 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.566779 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.566800 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.566816 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:05Z","lastTransitionTime":"2026-01-29T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.669028 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.669088 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.669100 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.669127 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.669147 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:05Z","lastTransitionTime":"2026-01-29T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.771523 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.771565 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.771577 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.771593 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.771603 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:05Z","lastTransitionTime":"2026-01-29T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.873747 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.873809 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.873822 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.873839 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.873853 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:05Z","lastTransitionTime":"2026-01-29T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.976242 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.976279 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.976295 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.976315 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:05 crc kubenswrapper[4660]: I0129 12:07:05.976329 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:05Z","lastTransitionTime":"2026-01-29T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.078220 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.078249 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.078257 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.078269 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.078278 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:06Z","lastTransitionTime":"2026-01-29T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.181072 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.181126 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.181134 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.181146 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.181156 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:06Z","lastTransitionTime":"2026-01-29T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.283322 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.283376 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.283388 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.283405 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.283416 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:06Z","lastTransitionTime":"2026-01-29T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.385930 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.385967 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.386006 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.386026 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.386036 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:06Z","lastTransitionTime":"2026-01-29T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.448026 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:58:39.297464872 +0000 UTC Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.468921 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.468972 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.468938 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:06 crc kubenswrapper[4660]: E0129 12:07:06.469046 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.468920 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:06 crc kubenswrapper[4660]: E0129 12:07:06.469236 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:06 crc kubenswrapper[4660]: E0129 12:07:06.469592 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:06 crc kubenswrapper[4660]: E0129 12:07:06.469657 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.487633 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.487671 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.487681 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.487716 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.487727 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:06Z","lastTransitionTime":"2026-01-29T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.590377 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.590537 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.590556 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.590580 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.590599 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:06Z","lastTransitionTime":"2026-01-29T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.693357 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.693398 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.693407 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.693422 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.693432 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:06Z","lastTransitionTime":"2026-01-29T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.795777 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.796076 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.796171 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.796266 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.796365 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:06Z","lastTransitionTime":"2026-01-29T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.898853 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.898895 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.898913 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.898935 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:06 crc kubenswrapper[4660]: I0129 12:07:06.898950 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:06Z","lastTransitionTime":"2026-01-29T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.002195 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.002223 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.002231 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.002244 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.002253 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:07Z","lastTransitionTime":"2026-01-29T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.104397 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.104426 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.104435 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.104447 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.104456 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:07Z","lastTransitionTime":"2026-01-29T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.206869 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.206901 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.206910 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.206926 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.206935 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:07Z","lastTransitionTime":"2026-01-29T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.310160 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.310240 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.310258 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.310736 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.310805 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:07Z","lastTransitionTime":"2026-01-29T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.413165 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.413202 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.413213 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.413229 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.413240 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:07Z","lastTransitionTime":"2026-01-29T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.448176 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:38:30.229878344 +0000 UTC Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.469790 4660 scope.go:117] "RemoveContainer" containerID="a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a" Jan 29 12:07:07 crc kubenswrapper[4660]: E0129 12:07:07.470013 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.515225 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.515264 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.515273 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.515291 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.515302 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:07Z","lastTransitionTime":"2026-01-29T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.618492 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.618814 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.619037 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.619239 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.619452 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:07Z","lastTransitionTime":"2026-01-29T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.722306 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.722565 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.722660 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.722786 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.722862 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:07Z","lastTransitionTime":"2026-01-29T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.825206 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.825470 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.825612 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.826092 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.826326 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:07Z","lastTransitionTime":"2026-01-29T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.929007 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.929049 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.929060 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.929076 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:07 crc kubenswrapper[4660]: I0129 12:07:07.929088 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:07Z","lastTransitionTime":"2026-01-29T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.031742 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.031787 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.031797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.031814 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.031825 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:08Z","lastTransitionTime":"2026-01-29T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.134050 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.134294 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.134386 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.134475 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.134573 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:08Z","lastTransitionTime":"2026-01-29T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.236588 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.236627 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.236636 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.236652 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.236662 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:08Z","lastTransitionTime":"2026-01-29T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.339010 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.339051 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.339060 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.339075 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.339084 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:08Z","lastTransitionTime":"2026-01-29T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.440976 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.441007 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.441015 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.441028 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.441037 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:08Z","lastTransitionTime":"2026-01-29T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.448427 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:03:21.967059848 +0000 UTC Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.469869 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.469919 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:08 crc kubenswrapper[4660]: E0129 12:07:08.470049 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.469885 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:08 crc kubenswrapper[4660]: E0129 12:07:08.470143 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:08 crc kubenswrapper[4660]: E0129 12:07:08.470204 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.470995 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:08 crc kubenswrapper[4660]: E0129 12:07:08.471209 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.542825 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.542856 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.542865 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.542879 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.542888 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:08Z","lastTransitionTime":"2026-01-29T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.645323 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.645363 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.645376 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.645393 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.645404 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:08Z","lastTransitionTime":"2026-01-29T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.749023 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.749087 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.749106 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.749133 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.749151 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:08Z","lastTransitionTime":"2026-01-29T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.836934 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs\") pod \"network-metrics-daemon-kj5hd\" (UID: \"37236252-cd23-4e04-8cf2-28b59af3e179\") " pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:08 crc kubenswrapper[4660]: E0129 12:07:08.837101 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:07:08 crc kubenswrapper[4660]: E0129 12:07:08.837212 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs podName:37236252-cd23-4e04-8cf2-28b59af3e179 nodeName:}" failed. No retries permitted until 2026-01-29 12:07:40.83718857 +0000 UTC m=+98.060130732 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs") pod "network-metrics-daemon-kj5hd" (UID: "37236252-cd23-4e04-8cf2-28b59af3e179") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.851730 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.851774 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.851786 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.851806 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.851818 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:08Z","lastTransitionTime":"2026-01-29T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.954692 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.954751 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.954763 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.954788 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:08 crc kubenswrapper[4660]: I0129 12:07:08.954800 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:08Z","lastTransitionTime":"2026-01-29T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.056964 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.057341 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.057439 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.057532 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.057623 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:09Z","lastTransitionTime":"2026-01-29T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.160005 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.160059 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.160080 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.160103 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.160119 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:09Z","lastTransitionTime":"2026-01-29T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.262287 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.262338 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.262354 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.262375 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.262390 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:09Z","lastTransitionTime":"2026-01-29T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.364102 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.364135 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.364143 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.364158 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.364167 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:09Z","lastTransitionTime":"2026-01-29T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.448745 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:13:12.908518421 +0000 UTC Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.466451 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.466490 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.466501 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.466519 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.466530 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:09Z","lastTransitionTime":"2026-01-29T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.569521 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.569562 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.569573 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.569589 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.569601 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:09Z","lastTransitionTime":"2026-01-29T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.671682 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.671726 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.671734 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.671748 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.671757 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:09Z","lastTransitionTime":"2026-01-29T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.684538 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.684623 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.684636 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.684652 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.684663 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:09Z","lastTransitionTime":"2026-01-29T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:09 crc kubenswrapper[4660]: E0129 12:07:09.705215 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.709289 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.709336 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.709351 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.709368 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.709379 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:09Z","lastTransitionTime":"2026-01-29T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:09 crc kubenswrapper[4660]: E0129 12:07:09.721335 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.724854 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.725001 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.725087 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.725172 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.725248 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:09Z","lastTransitionTime":"2026-01-29T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:09 crc kubenswrapper[4660]: E0129 12:07:09.738502 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.744366 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.744472 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.744512 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.744536 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.744548 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:09Z","lastTransitionTime":"2026-01-29T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.761410 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.761437 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.761447 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.761464 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.761475 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:09Z","lastTransitionTime":"2026-01-29T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:09 crc kubenswrapper[4660]: E0129 12:07:09.774858 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.776247 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.776284 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.776296 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.776314 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.776326 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:09Z","lastTransitionTime":"2026-01-29T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.878575 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.878616 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.878629 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.878646 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.878658 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:09Z","lastTransitionTime":"2026-01-29T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.981066 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.981112 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.981125 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.981143 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:09 crc kubenswrapper[4660]: I0129 12:07:09.981155 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:09Z","lastTransitionTime":"2026-01-29T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.083601 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.083647 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.083657 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.083673 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.083685 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:10Z","lastTransitionTime":"2026-01-29T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.185685 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.185740 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.185772 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.185788 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.185797 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:10Z","lastTransitionTime":"2026-01-29T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.288615 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.288667 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.288683 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.288737 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.288753 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:10Z","lastTransitionTime":"2026-01-29T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.391148 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.391186 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.391195 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.391210 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.391220 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:10Z","lastTransitionTime":"2026-01-29T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.449601 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:52:33.418239988 +0000 UTC Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.469013 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:10 crc kubenswrapper[4660]: E0129 12:07:10.469110 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.469259 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:10 crc kubenswrapper[4660]: E0129 12:07:10.469307 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.469540 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:10 crc kubenswrapper[4660]: E0129 12:07:10.469616 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.469670 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:10 crc kubenswrapper[4660]: E0129 12:07:10.469772 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.493538 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.493565 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.493576 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.493590 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.493600 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:10Z","lastTransitionTime":"2026-01-29T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.596580 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.596616 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.596627 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.596643 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.596654 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:10Z","lastTransitionTime":"2026-01-29T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.698802 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.698860 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.698874 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.698894 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.698905 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:10Z","lastTransitionTime":"2026-01-29T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.801797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.801870 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.801883 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.801902 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.801917 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:10Z","lastTransitionTime":"2026-01-29T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.904218 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.904261 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.904271 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.904286 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:10 crc kubenswrapper[4660]: I0129 12:07:10.904294 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:10Z","lastTransitionTime":"2026-01-29T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.006548 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.006593 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.006603 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.006621 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.006634 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:11Z","lastTransitionTime":"2026-01-29T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.108392 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.108427 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.108436 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.108453 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.108464 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:11Z","lastTransitionTime":"2026-01-29T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.242031 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.242064 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.242074 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.242088 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.242098 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:11Z","lastTransitionTime":"2026-01-29T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.344447 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.344782 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.344896 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.345033 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.345336 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:11Z","lastTransitionTime":"2026-01-29T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.447906 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.447944 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.447953 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.447966 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.447975 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:11Z","lastTransitionTime":"2026-01-29T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.450523 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 20:52:11.708096086 +0000 UTC Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.550443 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.550499 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.550511 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.550529 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.550540 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:11Z","lastTransitionTime":"2026-01-29T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.652487 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.652526 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.652536 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.652552 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.652561 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:11Z","lastTransitionTime":"2026-01-29T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.754678 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.754733 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.754743 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.754757 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.754768 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:11Z","lastTransitionTime":"2026-01-29T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.857842 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.857894 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.857906 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.857922 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.857932 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:11Z","lastTransitionTime":"2026-01-29T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.960335 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.960369 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.960377 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.960390 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:11 crc kubenswrapper[4660]: I0129 12:07:11.960399 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:11Z","lastTransitionTime":"2026-01-29T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.063272 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.063323 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.063334 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.063351 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.063364 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:12Z","lastTransitionTime":"2026-01-29T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.165744 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.166106 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.166218 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.166320 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.166410 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:12Z","lastTransitionTime":"2026-01-29T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.268723 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.268797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.268808 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.268824 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.268834 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:12Z","lastTransitionTime":"2026-01-29T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.370636 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.370676 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.370687 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.370714 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.370723 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:12Z","lastTransitionTime":"2026-01-29T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.451543 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:39:08.243786068 +0000 UTC Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.468942 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.468942 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.468986 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:12 crc kubenswrapper[4660]: E0129 12:07:12.469385 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:12 crc kubenswrapper[4660]: E0129 12:07:12.469225 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:12 crc kubenswrapper[4660]: E0129 12:07:12.469474 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.468994 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:12 crc kubenswrapper[4660]: E0129 12:07:12.469571 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.472578 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.472618 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.472630 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.472645 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.472657 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:12Z","lastTransitionTime":"2026-01-29T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.575493 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.575546 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.575555 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.575572 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.575585 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:12Z","lastTransitionTime":"2026-01-29T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.678592 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.678637 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.678645 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.678658 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.678668 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:12Z","lastTransitionTime":"2026-01-29T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.782013 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.782054 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.782064 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.782081 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.782093 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:12Z","lastTransitionTime":"2026-01-29T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.884401 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.884447 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.884458 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.884476 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.884488 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:12Z","lastTransitionTime":"2026-01-29T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.986132 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.986167 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.986177 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.986193 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:12 crc kubenswrapper[4660]: I0129 12:07:12.986204 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:12Z","lastTransitionTime":"2026-01-29T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.058212 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vb4nc_f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3/kube-multus/0.log" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.058270 4660 generic.go:334] "Generic (PLEG): container finished" podID="f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3" containerID="222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493" exitCode=1 Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.058304 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vb4nc" event={"ID":"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3","Type":"ContainerDied","Data":"222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493"} Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.058678 4660 scope.go:117] "RemoveContainer" containerID="222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.080383 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.094313 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.094513 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.094528 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.094604 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.095350 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:13Z","lastTransitionTime":"2026-01-29T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.095797 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.107664 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.120000 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:07:12Z\\\",\\\"message\\\":\\\"2026-01-29T12:06:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_848c2615-4af1-4993-8e26-cf74ec1823cd\\\\n2026-01-29T12:06:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_848c2615-4af1-4993-8e26-cf74ec1823cd to /host/opt/cni/bin/\\\\n2026-01-29T12:06:27Z [verbose] multus-daemon started\\\\n2026-01-29T12:06:27Z [verbose] Readiness Indicator file check\\\\n2026-01-29T12:07:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.130713 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.140951 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.152180 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a
4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.167158 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.179528 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.190834 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.196902 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.196946 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.196963 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.196984 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.197002 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:13Z","lastTransitionTime":"2026-01-29T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.205155 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.215684 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.225217 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.239603 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.261834 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:53Z\\\",\\\"message\\\":\\\"GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225279 6213 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225367 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:06:53.225444 6213 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba5
8764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.275228 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07f98086-3aca-40a2-90d1-2c75b528a82e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cd5ea2f1041217d885552fe898a7b87a706c7588f60fd398e7c73ba73cfb9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef3573008a2c750cbb5bcc0762d2d2bb801f7a537212eafcb1519942e2440b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f741a954cd77a898b743569dacdea3c0a9f16f16151b916fd2c5a6217729e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.287675 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.299575 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.299627 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.299637 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.299651 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.299660 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:13Z","lastTransitionTime":"2026-01-29T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.300272 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.401533 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.401567 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.401579 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.401596 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.401608 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:13Z","lastTransitionTime":"2026-01-29T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.452156 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:22:33.170032995 +0000 UTC Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.487310 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-re
sources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.502968 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1
f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.503808 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.503837 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.503848 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.503864 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.503874 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:13Z","lastTransitionTime":"2026-01-29T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.515812 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.525225 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a55
83997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.536630 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd
791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 
12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.562043 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.579423 4660 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.592211 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:07:12Z\\\",\\\"message\\\":\\\"2026-01-29T12:06:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_848c2615-4af1-4993-8e26-cf74ec1823cd\\\\n2026-01-29T12:06:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_848c2615-4af1-4993-8e26-cf74ec1823cd to /host/opt/cni/bin/\\\\n2026-01-29T12:06:27Z [verbose] multus-daemon started\\\\n2026-01-29T12:06:27Z [verbose] Readiness Indicator file check\\\\n2026-01-29T12:07:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.602154 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.605384 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.605414 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.605422 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.605436 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.605444 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:13Z","lastTransitionTime":"2026-01-29T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.612173 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc 
kubenswrapper[4660]: I0129 12:07:13.623750 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.634021 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.642674 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb
276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.653130 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07f98086-3aca-40a2-90d1-2c75b528a82e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cd5ea2f1041217d885552fe898a7b87a706c7588f60fd398e7c73ba73cfb9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef3573008a2c750cbb5bcc0762d2d2bb801f7a537212eafcb1519942e2440b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f741a954cd77a898b743569dacdea3c0a9f16f16151b916fd2c5a6217729e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.664428 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.675633 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.688540 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.705548 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:53Z\\\",\\\"message\\\":\\\"GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225279 6213 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225367 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:06:53.225444 6213 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba5
8764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:13Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.707182 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.707208 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.707218 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.707230 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.707239 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:13Z","lastTransitionTime":"2026-01-29T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.808854 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.808879 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.808905 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.808918 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.808928 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:13Z","lastTransitionTime":"2026-01-29T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.910519 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.910554 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.910563 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.910577 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:13 crc kubenswrapper[4660]: I0129 12:07:13.910585 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:13Z","lastTransitionTime":"2026-01-29T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.013026 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.013049 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.013057 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.013071 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.013086 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:14Z","lastTransitionTime":"2026-01-29T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.062937 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vb4nc_f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3/kube-multus/0.log" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.062984 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vb4nc" event={"ID":"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3","Type":"ContainerStarted","Data":"799a1c742d7cd17cf905dd5492628f201d1c5861e4b848ab1b62961075aa828f"} Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.089141 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.103245 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1
f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.115738 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.115786 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.115833 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.115856 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.115907 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:14Z","lastTransitionTime":"2026-01-29T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.120428 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.132812 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.149317 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.161151 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://799a1c742d7cd17cf905dd5492628f201d1c5861e4b848ab1b62961075aa828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:07:12Z\\\",\\\"message\\\":\\\"2026-01-29T12:06:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_848c2615-4af1-4993-8e26-cf74ec1823cd\\\\n2026-01-29T12:06:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_848c2615-4af1-4993-8e26-cf74ec1823cd to /host/opt/cni/bin/\\\\n2026-01-29T12:06:27Z [verbose] multus-daemon started\\\\n2026-01-29T12:06:27Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T12:07:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.171782 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed
0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.181469 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.192224 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.202322 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc 
kubenswrapper[4660]: I0129 12:07:14.218326 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.218366 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.218382 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.218400 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.218412 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:14Z","lastTransitionTime":"2026-01-29T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.218987 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.230950 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.245337 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.258179 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07f98086-3aca-40a2-90d1-2c75b528a82e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cd5ea2f1041217d885552fe898a7b87a706c7588f60fd398e7c73ba73cfb9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef3573008a2c750cbb5bcc0762d2d2bb801f7a537212eafcb1519942e2440b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f741a954cd77a898b743569dacdea3c0a9f16f16151b916fd2c5a6217729e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.271540 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.282451 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.294580 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.311365 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:53Z\\\",\\\"message\\\":\\\"GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225279 6213 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225367 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:06:53.225444 6213 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba5
8764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:14Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.320743 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.320902 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.320991 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.321089 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.321186 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:14Z","lastTransitionTime":"2026-01-29T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.423471 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.423804 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.423907 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.424008 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.424090 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:14Z","lastTransitionTime":"2026-01-29T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.452968 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:18:27.887465643 +0000 UTC Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.469283 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.469729 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.469776 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.469831 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:14 crc kubenswrapper[4660]: E0129 12:07:14.469931 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:14 crc kubenswrapper[4660]: E0129 12:07:14.470325 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:14 crc kubenswrapper[4660]: E0129 12:07:14.470469 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:14 crc kubenswrapper[4660]: E0129 12:07:14.470577 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.482176 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.526430 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.526471 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.526483 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.526500 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.526513 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:14Z","lastTransitionTime":"2026-01-29T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.628997 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.629250 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.629330 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.629420 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.629491 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:14Z","lastTransitionTime":"2026-01-29T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.731535 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.731680 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.731721 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.731748 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.731761 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:14Z","lastTransitionTime":"2026-01-29T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.833936 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.833973 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.833985 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.834002 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.834014 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:14Z","lastTransitionTime":"2026-01-29T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.936748 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.936797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.936806 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.936822 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:14 crc kubenswrapper[4660]: I0129 12:07:14.936831 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:14Z","lastTransitionTime":"2026-01-29T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.039581 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.039610 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.039618 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.039630 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.039638 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:15Z","lastTransitionTime":"2026-01-29T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.142392 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.142419 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.142426 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.142440 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.142447 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:15Z","lastTransitionTime":"2026-01-29T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.245215 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.245256 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.245266 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.245278 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.245287 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:15Z","lastTransitionTime":"2026-01-29T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.347447 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.347497 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.347509 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.347524 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.347534 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:15Z","lastTransitionTime":"2026-01-29T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.449962 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.450006 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.450025 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.450042 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.450053 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:15Z","lastTransitionTime":"2026-01-29T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.454099 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 11:29:21.793467117 +0000 UTC Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.552574 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.552613 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.552624 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.552640 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.552652 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:15Z","lastTransitionTime":"2026-01-29T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.654764 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.655042 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.655112 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.655183 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.655259 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:15Z","lastTransitionTime":"2026-01-29T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.758082 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.758136 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.758147 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.758167 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.758179 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:15Z","lastTransitionTime":"2026-01-29T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.860297 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.860578 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.860651 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.860750 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.860822 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:15Z","lastTransitionTime":"2026-01-29T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.963495 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.963542 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.963552 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.963568 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:15 crc kubenswrapper[4660]: I0129 12:07:15.963580 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:15Z","lastTransitionTime":"2026-01-29T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.066430 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.066492 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.066508 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.066565 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.066579 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:16Z","lastTransitionTime":"2026-01-29T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.169544 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.169588 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.169599 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.169616 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.169628 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:16Z","lastTransitionTime":"2026-01-29T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.271854 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.271893 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.271904 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.271919 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.271930 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:16Z","lastTransitionTime":"2026-01-29T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.374808 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.375060 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.375188 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.375293 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.375381 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:16Z","lastTransitionTime":"2026-01-29T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.455216 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:20:05.249208126 +0000 UTC Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.469248 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.469294 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:16 crc kubenswrapper[4660]: E0129 12:07:16.469579 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.469341 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:16 crc kubenswrapper[4660]: E0129 12:07:16.469715 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.469298 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:16 crc kubenswrapper[4660]: E0129 12:07:16.470089 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:16 crc kubenswrapper[4660]: E0129 12:07:16.469975 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.478123 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.478174 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.478186 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.478201 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.478213 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:16Z","lastTransitionTime":"2026-01-29T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.580833 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.580867 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.580882 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.580896 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.580905 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:16Z","lastTransitionTime":"2026-01-29T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.683228 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.683264 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.683275 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.683289 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.683298 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:16Z","lastTransitionTime":"2026-01-29T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.786421 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.786468 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.786481 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.786500 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.786512 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:16Z","lastTransitionTime":"2026-01-29T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.889447 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.889481 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.889492 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.889508 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.889521 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:16Z","lastTransitionTime":"2026-01-29T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.991395 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.991662 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.991747 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.991849 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:16 crc kubenswrapper[4660]: I0129 12:07:16.991913 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:16Z","lastTransitionTime":"2026-01-29T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.094148 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.094398 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.094459 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.094521 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.094606 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:17Z","lastTransitionTime":"2026-01-29T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.197085 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.197146 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.197156 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.197176 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.197187 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:17Z","lastTransitionTime":"2026-01-29T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.299994 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.300282 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.300424 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.300566 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.300717 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:17Z","lastTransitionTime":"2026-01-29T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.403615 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.403746 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.403771 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.403811 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.403837 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:17Z","lastTransitionTime":"2026-01-29T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.456172 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 12:44:31.710819164 +0000 UTC Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.506428 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.506469 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.506480 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.506494 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.506504 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:17Z","lastTransitionTime":"2026-01-29T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.609283 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.609576 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.609732 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.609835 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.609940 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:17Z","lastTransitionTime":"2026-01-29T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.712746 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.712794 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.712807 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.712825 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.712837 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:17Z","lastTransitionTime":"2026-01-29T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.814436 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.814463 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.814472 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.814486 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.814495 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:17Z","lastTransitionTime":"2026-01-29T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.916901 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.916940 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.916949 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.916964 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:17 crc kubenswrapper[4660]: I0129 12:07:17.916974 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:17Z","lastTransitionTime":"2026-01-29T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.019301 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.019338 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.019350 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.019364 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.019372 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:18Z","lastTransitionTime":"2026-01-29T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.122209 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.122252 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.122266 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.122285 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.122297 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:18Z","lastTransitionTime":"2026-01-29T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.224565 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.224594 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.224603 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.224617 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.224629 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:18Z","lastTransitionTime":"2026-01-29T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.327776 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.327815 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.327822 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.327837 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.327849 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:18Z","lastTransitionTime":"2026-01-29T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.429650 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.429709 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.429717 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.429731 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.429739 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:18Z","lastTransitionTime":"2026-01-29T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.457278 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 13:45:30.29739148 +0000 UTC Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.469572 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.469619 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.469713 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:18 crc kubenswrapper[4660]: E0129 12:07:18.469708 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.469753 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:18 crc kubenswrapper[4660]: E0129 12:07:18.469853 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:18 crc kubenswrapper[4660]: E0129 12:07:18.469889 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:18 crc kubenswrapper[4660]: E0129 12:07:18.469934 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.532627 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.532661 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.532670 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.532685 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.532713 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:18Z","lastTransitionTime":"2026-01-29T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.635009 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.635052 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.635062 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.635078 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.635090 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:18Z","lastTransitionTime":"2026-01-29T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.738429 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.738733 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.738814 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.738902 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.738989 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:18Z","lastTransitionTime":"2026-01-29T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.841214 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.841761 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.841836 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.841949 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.842023 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:18Z","lastTransitionTime":"2026-01-29T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.944422 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.944448 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.944457 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.944469 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:18 crc kubenswrapper[4660]: I0129 12:07:18.944482 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:18Z","lastTransitionTime":"2026-01-29T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.047235 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.047267 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.047275 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.047293 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.047312 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:19Z","lastTransitionTime":"2026-01-29T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.150118 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.150168 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.150179 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.150197 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.150209 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:19Z","lastTransitionTime":"2026-01-29T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.252385 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.252423 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.252433 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.252465 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.252476 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:19Z","lastTransitionTime":"2026-01-29T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.371014 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.371053 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.371064 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.371079 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.371088 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:19Z","lastTransitionTime":"2026-01-29T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.458249 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:21:58.331357437 +0000 UTC Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.474667 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.474727 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.474740 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.474754 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.474767 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:19Z","lastTransitionTime":"2026-01-29T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.576857 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.576907 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.576919 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.576935 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.576944 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:19Z","lastTransitionTime":"2026-01-29T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.679047 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.679086 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.679097 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.679112 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.679123 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:19Z","lastTransitionTime":"2026-01-29T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.781108 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.781157 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.781167 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.781183 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.781193 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:19Z","lastTransitionTime":"2026-01-29T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.883765 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.883876 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.883919 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.883936 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.883947 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:19Z","lastTransitionTime":"2026-01-29T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.948198 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.948226 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.948236 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.948271 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.948280 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:19Z","lastTransitionTime":"2026-01-29T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:19 crc kubenswrapper[4660]: E0129 12:07:19.968164 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.971646 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.971707 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.971720 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.971738 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.971750 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:19Z","lastTransitionTime":"2026-01-29T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:19 crc kubenswrapper[4660]: E0129 12:07:19.985901 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.989205 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.989231 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.989240 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.989256 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:19 crc kubenswrapper[4660]: I0129 12:07:19.989267 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:19Z","lastTransitionTime":"2026-01-29T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:20 crc kubenswrapper[4660]: E0129 12:07:20.003234 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:20Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.006477 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.006505 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.006513 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.006525 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.006535 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:20Z","lastTransitionTime":"2026-01-29T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:20 crc kubenswrapper[4660]: E0129 12:07:20.020073 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:20Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.023098 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.023130 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.023142 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.023157 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.023171 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:20Z","lastTransitionTime":"2026-01-29T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:20 crc kubenswrapper[4660]: E0129 12:07:20.033758 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:20Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:20 crc kubenswrapper[4660]: E0129 12:07:20.033919 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.035595 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.035635 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.035648 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.035666 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.035680 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:20Z","lastTransitionTime":"2026-01-29T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.138145 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.138212 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.138225 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.138242 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.138252 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:20Z","lastTransitionTime":"2026-01-29T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.241165 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.241244 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.241256 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.241278 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.241292 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:20Z","lastTransitionTime":"2026-01-29T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.344353 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.344425 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.344453 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.344488 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.344501 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:20Z","lastTransitionTime":"2026-01-29T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.447535 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.447601 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.447624 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.447652 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.447673 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:20Z","lastTransitionTime":"2026-01-29T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.459238 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 02:25:41.118352115 +0000 UTC Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.469618 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.469642 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.469747 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:20 crc kubenswrapper[4660]: E0129 12:07:20.469888 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.469926 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:20 crc kubenswrapper[4660]: E0129 12:07:20.470040 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:20 crc kubenswrapper[4660]: E0129 12:07:20.470192 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:20 crc kubenswrapper[4660]: E0129 12:07:20.470274 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.550088 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.550139 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.550156 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.550199 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.550218 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:20Z","lastTransitionTime":"2026-01-29T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.653676 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.654074 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.654309 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.654535 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.654983 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:20Z","lastTransitionTime":"2026-01-29T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.756746 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.756778 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.756788 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.756802 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.756811 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:20Z","lastTransitionTime":"2026-01-29T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.859158 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.859215 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.859230 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.859254 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.859269 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:20Z","lastTransitionTime":"2026-01-29T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.963195 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.963247 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.963259 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.963277 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:20 crc kubenswrapper[4660]: I0129 12:07:20.963290 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:20Z","lastTransitionTime":"2026-01-29T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.066034 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.066071 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.066080 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.066095 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.066104 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:21Z","lastTransitionTime":"2026-01-29T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.168652 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.168748 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.168768 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.168792 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.168809 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:21Z","lastTransitionTime":"2026-01-29T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.272082 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.272147 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.272161 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.272179 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.272192 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:21Z","lastTransitionTime":"2026-01-29T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.374852 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.374904 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.374915 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.374934 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.374947 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:21Z","lastTransitionTime":"2026-01-29T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.459547 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:40:33.248558816 +0000 UTC Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.470607 4660 scope.go:117] "RemoveContainer" containerID="a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.476576 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.476636 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.476657 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.476685 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.476740 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:21Z","lastTransitionTime":"2026-01-29T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.579322 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.579379 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.579391 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.579407 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.579418 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:21Z","lastTransitionTime":"2026-01-29T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.681160 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.681190 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.681199 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.681212 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.681222 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:21Z","lastTransitionTime":"2026-01-29T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.784360 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.784407 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.784435 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.784456 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.784472 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:21Z","lastTransitionTime":"2026-01-29T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.886373 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.886451 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.886468 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.886488 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.886499 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:21Z","lastTransitionTime":"2026-01-29T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.988945 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.988995 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.989007 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.989022 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:21 crc kubenswrapper[4660]: I0129 12:07:21.989033 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:21Z","lastTransitionTime":"2026-01-29T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.088985 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovnkube-controller/2.log" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.090739 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.090781 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.090792 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.090808 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.090821 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:22Z","lastTransitionTime":"2026-01-29T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.091894 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerStarted","Data":"b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637"} Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.092278 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.106244 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.119107 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.133172 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://799a1c742d7cd17cf905dd5492628f201d1c5861e4b848ab1b62961075aa828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:07:12Z\\\",\\\"message\\\":\\\"2026-01-29T12:06:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_848c2615-4af1-4993-8e26-cf74ec1823cd\\\\n2026-01-29T12:06:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_848c2615-4af1-4993-8e26-cf74ec1823cd to /host/opt/cni/bin/\\\\n2026-01-29T12:06:27Z [verbose] multus-daemon started\\\\n2026-01-29T12:06:27Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T12:07:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.144566 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed
0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.155383 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.165808 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.175861 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dacaca6-e959-42da-9640-d47978c2b4c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://675bac3555377a9cc0cafa3e0bdb76d7a025de4cda4a7aac3b160a66d16f8343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a1ad41b01dad8a365ccaae82460d697d2bd561cc9c6df637859f867e74f556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2a1ad41b01dad8a365ccaae82460d697d2bd561cc9c6df637859f867e74f556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.187668 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.193315 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.193476 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.193566 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.193638 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.193714 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:22Z","lastTransitionTime":"2026-01-29T12:07:22Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.198077 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc 
kubenswrapper[4660]: I0129 12:07:22.210083 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.222449 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.237766 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.249120 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.273980 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.296286 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.296326 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.296338 4660 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.296355 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.296367 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:22Z","lastTransitionTime":"2026-01-29T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.299758 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:53Z\\\",\\\"message\\\":\\\"GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225279 6213 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225367 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:06:53.225444 6213 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.314818 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07f98086-3aca-40a2-90d1-2c75b528a82e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cd5ea2f1041217d885552fe898a7b87a706c7588f60fd398e7c73ba73cfb9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef3573008a2c750cbb5bcc0762d2d2bb801f7a537212eafcb1519942e2440b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f741a954cd77a898b743569dacdea3c0a9f16f16151b916fd2c5a6217729e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.327671 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.347484 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.362892 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d
8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.398760 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.398792 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.398802 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.398815 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.398826 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:22Z","lastTransitionTime":"2026-01-29T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.460519 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:21:44.280322929 +0000 UTC Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.468989 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.469094 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.469453 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:22 crc kubenswrapper[4660]: E0129 12:07:22.469607 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.469684 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:22 crc kubenswrapper[4660]: E0129 12:07:22.469901 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:22 crc kubenswrapper[4660]: E0129 12:07:22.470200 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:22 crc kubenswrapper[4660]: E0129 12:07:22.470246 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.501460 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.501505 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.501516 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.501531 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.501540 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:22Z","lastTransitionTime":"2026-01-29T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.604047 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.604074 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.604083 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.604097 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.604108 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:22Z","lastTransitionTime":"2026-01-29T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.706847 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.706893 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.706904 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.706922 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.706934 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:22Z","lastTransitionTime":"2026-01-29T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.809418 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.809450 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.809459 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.809474 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.809486 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:22Z","lastTransitionTime":"2026-01-29T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.912257 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.912547 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.912635 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.912748 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:22 crc kubenswrapper[4660]: I0129 12:07:22.912837 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:22Z","lastTransitionTime":"2026-01-29T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.015610 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.015645 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.015653 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.015667 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.015678 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:23Z","lastTransitionTime":"2026-01-29T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.096084 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovnkube-controller/3.log" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.096771 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovnkube-controller/2.log" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.100213 4660 generic.go:334] "Generic (PLEG): container finished" podID="39de46a2-9cba-4331-aab2-697f0337563c" containerID="b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637" exitCode=1 Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.100256 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerDied","Data":"b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637"} Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.100302 4660 scope.go:117] "RemoveContainer" containerID="a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.102083 4660 scope.go:117] "RemoveContainer" containerID="b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637" Jan 29 12:07:23 crc kubenswrapper[4660]: E0129 12:07:23.102315 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.118137 4660 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.118509 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.118524 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.118539 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.118551 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:23Z","lastTransitionTime":"2026-01-29T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.119798 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.133675 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.148659 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.160586 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.173107 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.188770 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.214194 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:53Z\\\",\\\"message\\\":\\\"GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225279 6213 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225367 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:06:53.225444 6213 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:07:22Z\\\",\\\"message\\\":\\\"dler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 12:07:22.321123 6601 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 12:07:22.321151 6601 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 12:07:22.321206 6601 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 12:07:22.321253 6601 handler.go:190] Sending *v1.Namespace 
event handler 5 for removal\\\\nI0129 12:07:22.321285 6601 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 12:07:22.321350 6601 factory.go:656] Stopping watch factory\\\\nI0129 12:07:22.321381 6601 ovnkube.go:599] Stopped ovnkube\\\\nI0129 12:07:22.321399 6601 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 12:07:22.321421 6601 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 12:07:22.321354 6601 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 12:07:22.321433 6601 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 12:07:22.321380 6601 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 12:07:22.321378 6601 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 12:07:22.321508 6601 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:07:22.321603 6601 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bi
n-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"
name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc 
kubenswrapper[4660]: I0129 12:07:23.220776 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.221027 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.221114 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.221336 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.221411 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:23Z","lastTransitionTime":"2026-01-29T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.227587 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07f98086-3aca-40a2-90d1-2c75b528a82e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cd5ea2f1041217d885552fe898a7b87a706c7588f60fd398e7c73ba73cfb9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef3573008a2c750cbb5bcc0762d2d
2bb801f7a537212eafcb1519942e2440b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f741a954cd77a898b743569dacdea3c0a9f16f16151b916fd2c5a6217729e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.243211 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1
f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.261883 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fe
f53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.274399 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.291138 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.305047 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.320556 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://799a1c742d7cd17cf905dd5492628f201d1c5861e4b848ab1b62961075aa828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:07:12Z\\\",\\\"message\\\":\\\"2026-01-29T12:06:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_848c2615-4af1-4993-8e26-cf74ec1823cd\\\\n2026-01-29T12:06:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_848c2615-4af1-4993-8e26-cf74ec1823cd to /host/opt/cni/bin/\\\\n2026-01-29T12:06:27Z [verbose] multus-daemon started\\\\n2026-01-29T12:06:27Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T12:07:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.324740 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.324782 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.324794 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.324811 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.324821 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:23Z","lastTransitionTime":"2026-01-29T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.336332 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.346627 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.359188 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.369285 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dacaca6-e959-42da-9640-d47978c2b4c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://675bac3555377a9cc0cafa3e0bdb76d7a025de4cda4a7aac3b160a66d16f8343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a1ad41b01dad8a365ccaae82460d697d2bd561cc9c6df637859f867e74f556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2a1ad41b01dad8a365ccaae82460d697d2bd561cc9c6df637859f867e74f556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.382571 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc 
kubenswrapper[4660]: I0129 12:07:23.427215 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.427263 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.427275 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.427296 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.427331 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:23Z","lastTransitionTime":"2026-01-29T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.461141 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:43:16.101465775 +0000 UTC Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.502235 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\
\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef5
3a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.524586 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1
f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.529364 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.529405 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.529417 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.529540 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.529556 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:23Z","lastTransitionTime":"2026-01-29T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.541134 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.562494 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://799a1c742d7cd17cf905dd5492628f201d1c5861e4b848ab1b62961075aa828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:07:12Z\\\",\\\"message\\\":\\\"2026-01-29T12:06:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_848c2615-4af1-4993-8e26-cf74ec1823cd\\\\n2026-01-29T12:06:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_848c2615-4af1-4993-8e26-cf74ec1823cd to /host/opt/cni/bin/\\\\n2026-01-29T12:06:27Z [verbose] multus-daemon started\\\\n2026-01-29T12:06:27Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T12:07:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.573376 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed
0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.584740 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.597058 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.608678 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dacaca6-e959-42da-9640-d47978c2b4c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://675bac3555377a9cc0cafa3e0bdb76d7a025de4cda4a7aac3b160a66d16f8343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a1ad41b01dad8a365ccaae82460d697d2bd561cc9c6df637859f867e74f556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2a1ad41b01dad8a365ccaae82460d697d2bd561cc9c6df637859f867e74f556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.624092 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.631759 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.631797 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.631809 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.631824 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.631835 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:23Z","lastTransitionTime":"2026-01-29T12:07:23Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.638138 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0
a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.650109 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc 
kubenswrapper[4660]: I0129 12:07:23.664365 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.676808 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.686551 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb
276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.697067 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.713484 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a772fd17a5ee2a3305966aa65feedaea965a6a251a37789a312e474381d6399a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:06:53Z\\\",\\\"message\\\":\\\"GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225279 6213 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 12:06:53.225367 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:06:53.225444 6213 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:07:22Z\\\",\\\"message\\\":\\\"dler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 12:07:22.321123 6601 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 12:07:22.321151 6601 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 12:07:22.321206 6601 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 12:07:22.321253 6601 handler.go:190] Sending *v1.Namespace 
event handler 5 for removal\\\\nI0129 12:07:22.321285 6601 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 12:07:22.321350 6601 factory.go:656] Stopping watch factory\\\\nI0129 12:07:22.321381 6601 ovnkube.go:599] Stopped ovnkube\\\\nI0129 12:07:22.321399 6601 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 12:07:22.321421 6601 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 12:07:22.321354 6601 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 12:07:22.321433 6601 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 12:07:22.321380 6601 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 12:07:22.321378 6601 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 12:07:22.321508 6601 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:07:22.321603 6601 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:07:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bi
n-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"
name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc 
kubenswrapper[4660]: I0129 12:07:23.725530 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07f98086-3aca-40a2-90d1-2c75b528a82e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cd5ea2f1041217d885552fe898a7b87a706c7588f60fd398e7c73ba73cfb9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef3573008a2c750cbb5bcc0762d2d2bb801f7a537212eafcb1519942e2440b1\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f741a954cd77a898b743569dacdea3c0a9f16f16151b916fd2c5a6217729e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.733708 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.733749 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.733761 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.733779 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.733790 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:23Z","lastTransitionTime":"2026-01-29T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.737680 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.752556 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.835266 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.835303 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.835313 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.835329 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.835342 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:23Z","lastTransitionTime":"2026-01-29T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.938493 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.938580 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.938603 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.938632 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:23 crc kubenswrapper[4660]: I0129 12:07:23.938654 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:23Z","lastTransitionTime":"2026-01-29T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.042559 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.043537 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.043660 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.043772 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.043880 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:24Z","lastTransitionTime":"2026-01-29T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.107364 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovnkube-controller/3.log" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.114209 4660 scope.go:117] "RemoveContainer" containerID="b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637" Jan 29 12:07:24 crc kubenswrapper[4660]: E0129 12:07:24.114450 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.134508 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e16ad49e-99d1-4d65-902b-8da95f14a98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f98b21f761fe7890010a5e017444ebafebde6fb12d65dccfed152c5f64ae0cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98c2d4055746db70ff6b8f67477ad16f262c5f597df3425b62b84b6ebf1d0f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1816291c99644a9dd7d7a17b9c0d1d57f3ba1c414fa546251a765f1ef7353ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f3561fbe3a3df7fef53a2d6f0eabf2c674523f86e62842c996a33ce13a2266b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897aecd7d0205811a8b322a1855638a71df717ed98076cd08ae5e9f25f29902b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://534b48f962f09a940dde6ec901a33d09e3f266efd159b4411f3a187a31ec70eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cfea2989799f3457cd1f7f8e5a4a3fa34607bfaaf33be1528a57d27794c2649\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94b41d28e326cd9720d889ffe4a2933f406c34c6ecba8150a10c5e4c3045a659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.146406 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.146464 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.146482 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.146505 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.146524 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:24Z","lastTransitionTime":"2026-01-29T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.151419 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kqctn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e956e367-0df8-44cc-b87e-a7ed32942593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57931c3b44dbdaf141c0872395b1f6202fffd6a744bc25feebc0d58e836f2293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158d91dff0ee37a4bf5b735746b41e331940f8ab7d126941caca63818b0d9a2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://f177ec757b58155bb3ff4ec7f770b0847a3ef045a1c704f7239839bfbf86286c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddba20903ee044dcd7bf63ad13dfbeab8e6f8a039325565bc8340b97b34887e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd1f0ad734b4daa6e55b1c6f1e153bbff3b1f5ea53b0e8c7be152288cae884\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d99562e98cecdad58c5a74dca4436723616f329ac456d131a3fd209fe5c4c44c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebb6b640b2bcc5ba7e4b7d8a24ad1c15053f7be24fd767511324fe8f0dfc9906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kqctn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.165786 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.181486 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vb4nc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://799a1c742d7cd17cf905dd5492628f201d1c5861e4b848ab1b62961075aa828f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:07:12Z\\\",\\\"message\\\":\\\"2026-01-29T12:06:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_848c2615-4af1-4993-8e26-cf74ec1823cd\\\\n2026-01-29T12:06:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_848c2615-4af1-4993-8e26-cf74ec1823cd to /host/opt/cni/bin/\\\\n2026-01-29T12:06:27Z [verbose] multus-daemon started\\\\n2026-01-29T12:06:27Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T12:07:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6n4dr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vb4nc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.198809 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6334701a424782614883f0a247b0584c95b15e7d090c7e5b4157a51401fe582c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed
0598fdc5447a1784739afc0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6zkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdfz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.209123 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-n6ljb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9986bb09-c5a8-40f0-a89a-219fbeeaaef2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640aaa9175579ebf8e804717132eb1d60d48eb5f4395a371762f8df99be2a369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t5mv4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-n6ljb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.220324 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fee30d5-4eb6-49c2-9c8b-457aca2103a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f62e14a5583997c1d6eb0225cf2ceb8109f8e77604c622097a2c88292a0a9ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70915cae47bd54165bd23fbe68b502bc3a32a4963bf0e79c3861ceb250d84163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tg7sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-r9jbv\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.230265 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dacaca6-e959-42da-9640-d47978c2b4c7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://675bac3555377a9cc0cafa3e0bdb76d7a025de4cda4a7aac3b160a66d16f8343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2a1ad41b01dad8a365ccaae82460d697d2bd561cc9c6df637859f867e74f556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2a1ad41b01dad8a365ccaae82460d697d2bd561cc9c6df637859f867e74f556\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.244782 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b3a80cd-1010-4777-867a-4f5b9e8c6a34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:06:22Z\\\"
,\\\"message\\\":\\\" named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769688367\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769688367\\\\\\\\\\\\\\\" (2026-01-29 11:06:07 +0000 UTC to 2027-01-29 11:06:07 +0000 UTC (now=2026-01-29 12:06:22.186295541 +0000 UTC))\\\\\\\"\\\\nI0129 12:06:22.186335 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0129 12:06:22.186364 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0129 12:06:22.186459 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186522 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0129 12:06:22.186574 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186589 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0129 12:06:22.186623 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0129 12:06:22.186632 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0129 12:06:22.186635 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-74725469/tls.crt::/tmp/serving-cert-74725469/tls.key\\\\\\\"\\\\nI0129 12:06:22.186775 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0129 12:06:22.186787 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0129 12:06:22.186854 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0129 12:06:22.187302 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.248266 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.248334 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.248357 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.248385 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.248407 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:24Z","lastTransitionTime":"2026-01-29T12:07:24Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.257206 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb0950c-ee5d-4314-82ee-a70fb6bee4a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b44debb66405d56121495ad1c5313e043bf0b099de2f0a6ec4a8b304f9ffb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d5625a824cb6e94a49bd518802b34573aaa7b63f066d7c7136153921ee059e9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c71598242c0346dd2315ee23e3ec833fa0a74004e0b282db2639f226e2850ddf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0
a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.267473 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37236252-cd23-4e04-8cf2-28b59af3e179\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fpwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kj5hd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc 
kubenswrapper[4660]: I0129 12:07:24.279169 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18beea0bf6728df822c036f58324827b0ec547390d6400712a3e804fb4d4f8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.288176 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ad4af99ccc3ecc583ddd5e0dd35f5fa77da59bf79849be1ee17589f455cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.298507 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6c7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"485a51de-d434-4747-b63f-48c7486cefb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7586e69c1b4655ec11093b16b3cd80407d6ec6944ba7800d9455830a2b250c9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb
276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6c7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.327609 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.350589 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.350642 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.350651 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.350667 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.350677 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:24Z","lastTransitionTime":"2026-01-29T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.366356 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39de46a2-9cba-4331-aab2-697f0337563c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:07:22Z\\\",\\\"message\\\":\\\"dler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 12:07:22.321123 6601 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0129 12:07:22.321151 6601 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0129 12:07:22.321206 6601 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0129 12:07:22.321253 6601 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 12:07:22.321285 6601 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 12:07:22.321350 6601 factory.go:656] Stopping watch factory\\\\nI0129 12:07:22.321381 6601 ovnkube.go:599] Stopped ovnkube\\\\nI0129 12:07:22.321399 6601 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 12:07:22.321421 6601 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 12:07:22.321354 6601 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 12:07:22.321433 6601 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 12:07:22.321380 6601 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 12:07:22.321378 6601 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 12:07:22.321508 6601 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:07:22.321603 6601 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:07:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46a53d43606a7d3ba5
8764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-267kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-clbcs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.378517 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07f98086-3aca-40a2-90d1-2c75b528a82e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cd5ea2f1041217d885552fe898a7b87a706c7588f60fd398e7c73ba73cfb9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef3573008a2c750cbb5bcc0762d2d2bb801f7a537212eafcb1519942e2440b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f741a954cd77a898b743569dacdea3c0a9f16f16151b916fd2c5a6217729e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://692595c7dc87b796b739e608b9387a259dc9b3cd5ef6c8280e0cd754692987ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:06:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:06:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.389381 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9f10e15c4681d23dabf73f34f55987993b73ce3634c7893200ac1b5029b2ce5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a458537ca3e5941379782ff358e9f899344e1c6b67b2a86486d82d061b1a0ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.399515 4660 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.453200 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.453235 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.453244 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.453258 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.453266 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:24Z","lastTransitionTime":"2026-01-29T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.462292 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 03:45:32.203532495 +0000 UTC Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.469533 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.469562 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:24 crc kubenswrapper[4660]: E0129 12:07:24.469612 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.469637 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.469748 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:24 crc kubenswrapper[4660]: E0129 12:07:24.469793 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:24 crc kubenswrapper[4660]: E0129 12:07:24.469739 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:24 crc kubenswrapper[4660]: E0129 12:07:24.469841 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.555767 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.555810 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.555822 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.555838 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.555849 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:24Z","lastTransitionTime":"2026-01-29T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.657803 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.657853 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.657866 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.657883 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.657894 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:24Z","lastTransitionTime":"2026-01-29T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.759403 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.759523 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.759539 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.759556 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.759570 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:24Z","lastTransitionTime":"2026-01-29T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.861416 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.861464 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.861479 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.861494 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.861505 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:24Z","lastTransitionTime":"2026-01-29T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.964374 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.964451 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.964468 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.964489 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:24 crc kubenswrapper[4660]: I0129 12:07:24.964503 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:24Z","lastTransitionTime":"2026-01-29T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.068806 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.068859 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.068870 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.068887 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.068899 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:25Z","lastTransitionTime":"2026-01-29T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.171539 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.171842 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.171931 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.172035 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.172186 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:25Z","lastTransitionTime":"2026-01-29T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.274955 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.275224 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.275290 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.275362 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.275422 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:25Z","lastTransitionTime":"2026-01-29T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.378961 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.379015 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.379028 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.379047 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.379059 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:25Z","lastTransitionTime":"2026-01-29T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.462758 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:47:37.976854179 +0000 UTC Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.481374 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.481413 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.481424 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.481442 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.481453 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:25Z","lastTransitionTime":"2026-01-29T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.584140 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.584185 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.584195 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.584210 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.584221 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:25Z","lastTransitionTime":"2026-01-29T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.687177 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.687229 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.687243 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.687263 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.687280 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:25Z","lastTransitionTime":"2026-01-29T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.790397 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.790444 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.790454 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.790466 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.790475 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:25Z","lastTransitionTime":"2026-01-29T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.892756 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.892819 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.892841 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.892865 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.892880 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:25Z","lastTransitionTime":"2026-01-29T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.995423 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.995482 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.995498 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.995521 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:25 crc kubenswrapper[4660]: I0129 12:07:25.995538 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:25Z","lastTransitionTime":"2026-01-29T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.098526 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.098590 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.098612 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.098641 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.098666 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:26Z","lastTransitionTime":"2026-01-29T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.201352 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.201395 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.201412 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.201429 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.201440 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:26Z","lastTransitionTime":"2026-01-29T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.303604 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.303648 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.303658 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.303675 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.303705 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:26Z","lastTransitionTime":"2026-01-29T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.308363 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.308473 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.308493 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.308631 4660 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.308712 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.308652725 +0000 UTC m=+147.531594897 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.308717 4660 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.308763 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.308748198 +0000 UTC m=+147.531690360 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.308875 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.308839821 +0000 UTC m=+147.531781993 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.406451 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.406516 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.406530 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.406550 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.406568 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:26Z","lastTransitionTime":"2026-01-29T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.411000 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.411069 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.411197 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.411216 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.411229 4660 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.411280 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.411263386 +0000 UTC m=+147.634205528 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.411524 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.411543 4660 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.411552 4660 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.411578 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.411569885 +0000 UTC m=+147.634512027 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.463139 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 10:41:59.1181889 +0000 UTC Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.469485 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.469515 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.469762 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.469868 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.469886 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.469946 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.470117 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:26 crc kubenswrapper[4660]: E0129 12:07:26.470189 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.511176 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.511510 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.511890 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.512041 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.512181 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:26Z","lastTransitionTime":"2026-01-29T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.615110 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.615151 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.615164 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.615183 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.615197 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:26Z","lastTransitionTime":"2026-01-29T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.717568 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.718045 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.718162 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.718272 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.718478 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:26Z","lastTransitionTime":"2026-01-29T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.821794 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.821842 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.821853 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.821872 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.821884 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:26Z","lastTransitionTime":"2026-01-29T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.924700 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.924736 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.924744 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.924758 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:26 crc kubenswrapper[4660]: I0129 12:07:26.924768 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:26Z","lastTransitionTime":"2026-01-29T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.027831 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.027922 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.027943 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.027966 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.027980 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:27Z","lastTransitionTime":"2026-01-29T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.130716 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.130757 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.130766 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.130783 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.130794 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:27Z","lastTransitionTime":"2026-01-29T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.233847 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.233915 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.233929 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.233958 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.233973 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:27Z","lastTransitionTime":"2026-01-29T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.336826 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.336890 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.336906 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.336954 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.336965 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:27Z","lastTransitionTime":"2026-01-29T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.439319 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.439400 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.439423 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.439451 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.439477 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:27Z","lastTransitionTime":"2026-01-29T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.463856 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 12:09:47.133166188 +0000 UTC Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.541316 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.541361 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.541374 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.541392 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.541404 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:27Z","lastTransitionTime":"2026-01-29T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.643711 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.643743 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.643752 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.643766 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.643774 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:27Z","lastTransitionTime":"2026-01-29T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.745679 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.745779 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.745802 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.745828 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.745850 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:27Z","lastTransitionTime":"2026-01-29T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.848872 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.848916 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.849047 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.849068 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.849079 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:27Z","lastTransitionTime":"2026-01-29T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.951446 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.951487 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.951497 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.951511 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:27 crc kubenswrapper[4660]: I0129 12:07:27.951521 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:27Z","lastTransitionTime":"2026-01-29T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.054580 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.054626 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.054635 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.054653 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.054669 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:28Z","lastTransitionTime":"2026-01-29T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.156834 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.156879 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.156890 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.156907 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.156934 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:28Z","lastTransitionTime":"2026-01-29T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.259931 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.259985 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.260001 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.260021 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.260036 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:28Z","lastTransitionTime":"2026-01-29T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.362035 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.362078 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.362090 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.362106 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.362116 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:28Z","lastTransitionTime":"2026-01-29T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.464410 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:23:40.660215435 +0000 UTC Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.464853 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.464878 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.464888 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.464904 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.464915 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:28Z","lastTransitionTime":"2026-01-29T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.469655 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:28 crc kubenswrapper[4660]: E0129 12:07:28.469776 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.469925 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:28 crc kubenswrapper[4660]: E0129 12:07:28.469984 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.470088 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:28 crc kubenswrapper[4660]: E0129 12:07:28.470140 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.470247 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:28 crc kubenswrapper[4660]: E0129 12:07:28.470295 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.567404 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.567445 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.567457 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.567475 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.567486 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:28Z","lastTransitionTime":"2026-01-29T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.670813 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.670875 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.670887 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.670905 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.670916 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:28Z","lastTransitionTime":"2026-01-29T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.773523 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.773640 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.773657 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.774008 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.774029 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:28Z","lastTransitionTime":"2026-01-29T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.878112 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.878370 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.878471 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.878616 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.878753 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:28Z","lastTransitionTime":"2026-01-29T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.981551 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.981591 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.981601 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.981617 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:28 crc kubenswrapper[4660]: I0129 12:07:28.981629 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:28Z","lastTransitionTime":"2026-01-29T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.089723 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.089801 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.089815 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.089836 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.089857 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:29Z","lastTransitionTime":"2026-01-29T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.192907 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.193437 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.193643 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.193803 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.193918 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:29Z","lastTransitionTime":"2026-01-29T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.296137 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.296222 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.296241 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.296323 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.296341 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:29Z","lastTransitionTime":"2026-01-29T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.398611 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.398985 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.399141 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.399309 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.399587 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:29Z","lastTransitionTime":"2026-01-29T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.465437 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 19:16:01.519416477 +0000 UTC Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.503149 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.503508 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.503790 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.504030 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.504193 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:29Z","lastTransitionTime":"2026-01-29T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.607833 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.607959 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.607985 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.608013 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.608034 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:29Z","lastTransitionTime":"2026-01-29T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.710633 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.710666 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.710676 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.710703 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.710715 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:29Z","lastTransitionTime":"2026-01-29T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.816884 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.817076 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.817174 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.817245 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.817324 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:29Z","lastTransitionTime":"2026-01-29T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.920173 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.920421 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.920428 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.920444 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:29 crc kubenswrapper[4660]: I0129 12:07:29.920453 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:29Z","lastTransitionTime":"2026-01-29T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.024022 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.024483 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.024557 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.024626 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.024723 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:30Z","lastTransitionTime":"2026-01-29T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.128889 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.129349 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.129425 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.129497 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.129575 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:30Z","lastTransitionTime":"2026-01-29T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.232799 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.232844 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.232855 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.232883 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.232899 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:30Z","lastTransitionTime":"2026-01-29T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.335925 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.335982 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.335992 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.336011 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.336027 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:30Z","lastTransitionTime":"2026-01-29T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.368277 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.368330 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.368348 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.368373 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.368387 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:30Z","lastTransitionTime":"2026-01-29T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:30 crc kubenswrapper[4660]: E0129 12:07:30.385482 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.391319 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.391363 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.391375 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.391396 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.391410 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:30Z","lastTransitionTime":"2026-01-29T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:30 crc kubenswrapper[4660]: E0129 12:07:30.406777 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.412102 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.412151 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.412163 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.412184 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.412197 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:30Z","lastTransitionTime":"2026-01-29T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:30 crc kubenswrapper[4660]: E0129 12:07:30.427473 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.432098 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.432230 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.432293 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.432386 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.432450 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:30Z","lastTransitionTime":"2026-01-29T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:30 crc kubenswrapper[4660]: E0129 12:07:30.446727 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.450758 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.450817 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.450828 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.450848 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.450862 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:30Z","lastTransitionTime":"2026-01-29T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:30 crc kubenswrapper[4660]: E0129 12:07:30.464157 4660 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:07:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c98432e5-da5f-42fb-aa9a-8b9962bbfbea\\\",\\\"systemUUID\\\":\\\"84b5dbbc-d752-4d09-af33-1495e13b6eab\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:07:30Z is after 2025-08-24T17:21:41Z" Jan 29 12:07:30 crc kubenswrapper[4660]: E0129 12:07:30.464327 4660 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.465824 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:44:09.211408361 +0000 UTC Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.466000 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.466087 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.466194 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.466282 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.466365 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:30Z","lastTransitionTime":"2026-01-29T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.469259 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.469333 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.469360 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:30 crc kubenswrapper[4660]: E0129 12:07:30.469432 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.469449 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:30 crc kubenswrapper[4660]: E0129 12:07:30.469576 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:30 crc kubenswrapper[4660]: E0129 12:07:30.469717 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:30 crc kubenswrapper[4660]: E0129 12:07:30.469742 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.568831 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.568861 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.568871 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.568883 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.568892 4660 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:30Z","lastTransitionTime":"2026-01-29T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.672177 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.672236 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.672248 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.672267 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.672279 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:30Z","lastTransitionTime":"2026-01-29T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.774991 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.775062 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.775103 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.775147 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.775169 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:30Z","lastTransitionTime":"2026-01-29T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.877384 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.877489 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.877523 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.877551 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.877573 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:30Z","lastTransitionTime":"2026-01-29T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.980851 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.980951 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.980977 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.981010 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:30 crc kubenswrapper[4660]: I0129 12:07:30.981033 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:30Z","lastTransitionTime":"2026-01-29T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.083393 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.083426 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.083436 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.083448 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.083457 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:31Z","lastTransitionTime":"2026-01-29T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.186036 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.186086 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.186097 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.186114 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.186125 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:31Z","lastTransitionTime":"2026-01-29T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.290020 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.290063 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.290071 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.290086 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.290095 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:31Z","lastTransitionTime":"2026-01-29T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.392250 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.392296 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.392307 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.392323 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.392335 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:31Z","lastTransitionTime":"2026-01-29T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.466285 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 12:14:24.025245723 +0000 UTC Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.494001 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.494038 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.494046 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.494059 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.494069 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:31Z","lastTransitionTime":"2026-01-29T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.596713 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.596757 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.596765 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.596781 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.596795 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:31Z","lastTransitionTime":"2026-01-29T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.698923 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.698958 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.698967 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.698979 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.698991 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:31Z","lastTransitionTime":"2026-01-29T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.801800 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.801837 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.801847 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.801863 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.801876 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:31Z","lastTransitionTime":"2026-01-29T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.905425 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.905498 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.905513 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.905538 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:31 crc kubenswrapper[4660]: I0129 12:07:31.905559 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:31Z","lastTransitionTime":"2026-01-29T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.008810 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.008875 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.008893 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.008916 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.008934 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:32Z","lastTransitionTime":"2026-01-29T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.110935 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.111004 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.111026 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.111056 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.111070 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:32Z","lastTransitionTime":"2026-01-29T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.212782 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.212821 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.212831 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.212845 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.212854 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:32Z","lastTransitionTime":"2026-01-29T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.315989 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.316037 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.316047 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.316061 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.316072 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:32Z","lastTransitionTime":"2026-01-29T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.420328 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.420379 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.420388 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.420407 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.420423 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:32Z","lastTransitionTime":"2026-01-29T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.467409 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:33:16.872979755 +0000 UTC Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.469761 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.469851 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.469899 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:32 crc kubenswrapper[4660]: E0129 12:07:32.469986 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:32 crc kubenswrapper[4660]: E0129 12:07:32.469908 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:32 crc kubenswrapper[4660]: E0129 12:07:32.470102 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.470132 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:32 crc kubenswrapper[4660]: E0129 12:07:32.470224 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.523389 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.523433 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.523446 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.523467 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.523482 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:32Z","lastTransitionTime":"2026-01-29T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.626543 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.626578 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.626588 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.626604 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.626614 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:32Z","lastTransitionTime":"2026-01-29T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.728781 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.728816 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.728826 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.728840 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.728851 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:32Z","lastTransitionTime":"2026-01-29T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.830982 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.831044 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.831055 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.831071 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.831083 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:32Z","lastTransitionTime":"2026-01-29T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.935130 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.935215 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.935228 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.935271 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:32 crc kubenswrapper[4660]: I0129 12:07:32.935291 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:32Z","lastTransitionTime":"2026-01-29T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.038276 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.038316 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.038327 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.038344 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.038356 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:33Z","lastTransitionTime":"2026-01-29T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.140370 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.140415 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.140426 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.140441 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.140450 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:33Z","lastTransitionTime":"2026-01-29T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.243912 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.244212 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.244247 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.244273 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.244292 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:33Z","lastTransitionTime":"2026-01-29T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.348188 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.348229 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.348239 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.348256 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.348268 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:33Z","lastTransitionTime":"2026-01-29T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.451739 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.451807 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.451821 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.451841 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.451855 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:33Z","lastTransitionTime":"2026-01-29T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.468191 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 03:39:06.37087666 +0000 UTC Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.521968 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.521944738 podStartE2EDuration="1m11.521944738s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:33.502482055 +0000 UTC m=+90.725424187" watchObservedRunningTime="2026-01-29 12:07:33.521944738 +0000 UTC m=+90.744886870" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.537084 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=69.537052623 podStartE2EDuration="1m9.537052623s" podCreationTimestamp="2026-01-29 12:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:33.52302546 +0000 UTC m=+90.745967592" watchObservedRunningTime="2026-01-29 12:07:33.537052623 +0000 UTC m=+90.759994755" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.555909 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.556166 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.556254 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:33 crc 
kubenswrapper[4660]: I0129 12:07:33.556340 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.556456 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:33Z","lastTransitionTime":"2026-01-29T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.560542 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-vb4nc" podStartSLOduration=71.560518884 podStartE2EDuration="1m11.560518884s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:33.559559026 +0000 UTC m=+90.782501158" watchObservedRunningTime="2026-01-29 12:07:33.560518884 +0000 UTC m=+90.783461016" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.580401 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podStartSLOduration=71.580381929 podStartE2EDuration="1m11.580381929s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:33.580123751 +0000 UTC m=+90.803065883" watchObservedRunningTime="2026-01-29 12:07:33.580381929 +0000 UTC m=+90.803324061" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.608661 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-n6ljb" 
podStartSLOduration=71.608642631 podStartE2EDuration="1m11.608642631s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:33.596286347 +0000 UTC m=+90.819228479" watchObservedRunningTime="2026-01-29 12:07:33.608642631 +0000 UTC m=+90.831584763" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.628272 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-r9jbv" podStartSLOduration=70.628244468 podStartE2EDuration="1m10.628244468s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:33.609095334 +0000 UTC m=+90.832037466" watchObservedRunningTime="2026-01-29 12:07:33.628244468 +0000 UTC m=+90.851186600" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.646653 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=19.646638859 podStartE2EDuration="19.646638859s" podCreationTimestamp="2026-01-29 12:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:33.628959429 +0000 UTC m=+90.851901561" watchObservedRunningTime="2026-01-29 12:07:33.646638859 +0000 UTC m=+90.869580991" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.660056 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.660102 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.660115 4660 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.660130 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.660140 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:33Z","lastTransitionTime":"2026-01-29T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.691562 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-6c7n9" podStartSLOduration=71.691535131 podStartE2EDuration="1m11.691535131s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:33.676292963 +0000 UTC m=+90.899235095" watchObservedRunningTime="2026-01-29 12:07:33.691535131 +0000 UTC m=+90.914477263" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.762616 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.762669 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.762683 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.762721 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.762734 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:33Z","lastTransitionTime":"2026-01-29T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.782132 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.782105288 podStartE2EDuration="43.782105288s" podCreationTimestamp="2026-01-29 12:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:33.781880181 +0000 UTC m=+91.004822313" watchObservedRunningTime="2026-01-29 12:07:33.782105288 +0000 UTC m=+91.005047420" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.845999 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-kqctn" podStartSLOduration=71.845976299 podStartE2EDuration="1m11.845976299s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:33.80594716 +0000 UTC m=+91.028889312" watchObservedRunningTime="2026-01-29 12:07:33.845976299 +0000 UTC m=+91.068918431" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.846174 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=68.846168204 podStartE2EDuration="1m8.846168204s" podCreationTimestamp="2026-01-29 12:06:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:33.837608102 +0000 UTC m=+91.060550244" watchObservedRunningTime="2026-01-29 12:07:33.846168204 +0000 UTC m=+91.069110336" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.873381 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.873979 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.874061 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.874165 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.874268 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:33Z","lastTransitionTime":"2026-01-29T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.978292 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.978416 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.978427 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.978445 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:33 crc kubenswrapper[4660]: I0129 12:07:33.978457 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:33Z","lastTransitionTime":"2026-01-29T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.081201 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.081253 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.081268 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.081289 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.081302 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:34Z","lastTransitionTime":"2026-01-29T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.184444 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.184500 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.184511 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.184530 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.184544 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:34Z","lastTransitionTime":"2026-01-29T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.287392 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.287438 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.287449 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.287472 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.287494 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:34Z","lastTransitionTime":"2026-01-29T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.390289 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.390349 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.390401 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.390419 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.390723 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:34Z","lastTransitionTime":"2026-01-29T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.468740 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:04:28.986196005 +0000 UTC Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.468939 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.468950 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.469000 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:34 crc kubenswrapper[4660]: E0129 12:07:34.469079 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.469193 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:34 crc kubenswrapper[4660]: E0129 12:07:34.469187 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:34 crc kubenswrapper[4660]: E0129 12:07:34.469319 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:34 crc kubenswrapper[4660]: E0129 12:07:34.469410 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.493983 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.494061 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.494073 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.494094 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.494106 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:34Z","lastTransitionTime":"2026-01-29T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.596818 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.596854 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.596866 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.596883 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.596895 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:34Z","lastTransitionTime":"2026-01-29T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.699387 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.699438 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.699450 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.699469 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.699481 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:34Z","lastTransitionTime":"2026-01-29T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.802714 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.802779 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.802792 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.802812 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.802825 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:34Z","lastTransitionTime":"2026-01-29T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.906034 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.906071 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.906080 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.906096 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:34 crc kubenswrapper[4660]: I0129 12:07:34.906106 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:34Z","lastTransitionTime":"2026-01-29T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.008888 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.008928 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.008942 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.008962 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.008977 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:35Z","lastTransitionTime":"2026-01-29T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.111377 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.111432 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.111447 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.111468 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.111480 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:35Z","lastTransitionTime":"2026-01-29T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.214738 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.214790 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.214801 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.214821 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.214837 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:35Z","lastTransitionTime":"2026-01-29T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.318427 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.318564 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.318576 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.318599 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.318642 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:35Z","lastTransitionTime":"2026-01-29T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.420532 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.421151 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.421252 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.421327 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.421404 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:35Z","lastTransitionTime":"2026-01-29T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.468917 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:18:26.750922662 +0000 UTC Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.524004 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.524057 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.524065 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.524082 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.524094 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:35Z","lastTransitionTime":"2026-01-29T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.626429 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.626763 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.626858 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.626934 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.626993 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:35Z","lastTransitionTime":"2026-01-29T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.729351 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.729390 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.729404 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.729422 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.729436 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:35Z","lastTransitionTime":"2026-01-29T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.832219 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.832289 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.832302 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.832318 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.832329 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:35Z","lastTransitionTime":"2026-01-29T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.934882 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.934928 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.934955 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.934970 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:35 crc kubenswrapper[4660]: I0129 12:07:35.934980 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:35Z","lastTransitionTime":"2026-01-29T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.038172 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.038241 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.038251 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.038301 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.038321 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:36Z","lastTransitionTime":"2026-01-29T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.141787 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.141830 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.141844 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.141862 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.141876 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:36Z","lastTransitionTime":"2026-01-29T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.245565 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.246014 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.246103 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.246180 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.246248 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:36Z","lastTransitionTime":"2026-01-29T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.350462 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.350967 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.351066 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.351182 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.351299 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:36Z","lastTransitionTime":"2026-01-29T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.455145 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.455967 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.456045 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.456162 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.456251 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:36Z","lastTransitionTime":"2026-01-29T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.469816 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.469865 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.469880 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.469912 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.469932 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 22:25:56.638066529 +0000 UTC Jan 29 12:07:36 crc kubenswrapper[4660]: E0129 12:07:36.470309 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:36 crc kubenswrapper[4660]: E0129 12:07:36.470446 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:36 crc kubenswrapper[4660]: E0129 12:07:36.470536 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:36 crc kubenswrapper[4660]: E0129 12:07:36.470619 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.559452 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.559500 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.559512 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.559532 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.559548 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:36Z","lastTransitionTime":"2026-01-29T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.662903 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.662993 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.663007 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.663036 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.663053 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:36Z","lastTransitionTime":"2026-01-29T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.766134 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.766190 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.766201 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.766252 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.766275 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:36Z","lastTransitionTime":"2026-01-29T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.869719 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.869766 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.869776 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.869795 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.869808 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:36Z","lastTransitionTime":"2026-01-29T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.972389 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.972483 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.972497 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.972516 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:36 crc kubenswrapper[4660]: I0129 12:07:36.972528 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:36Z","lastTransitionTime":"2026-01-29T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.075174 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.075233 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.075249 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.075642 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.075682 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:37Z","lastTransitionTime":"2026-01-29T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.178440 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.178505 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.178515 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.178534 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.178544 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:37Z","lastTransitionTime":"2026-01-29T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.281896 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.281962 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.281988 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.282016 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.282037 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:37Z","lastTransitionTime":"2026-01-29T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.384907 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.384979 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.384993 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.385020 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.385050 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:37Z","lastTransitionTime":"2026-01-29T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.469869 4660 scope.go:117] "RemoveContainer" containerID="b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637" Jan 29 12:07:37 crc kubenswrapper[4660]: E0129 12:07:37.470251 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.470445 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 08:12:02.826654099 +0000 UTC Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.487347 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.487411 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.487442 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.487472 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.487493 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:37Z","lastTransitionTime":"2026-01-29T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.590185 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.590239 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.590251 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.590268 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.590279 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:37Z","lastTransitionTime":"2026-01-29T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.692990 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.693021 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.693029 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.693041 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.693050 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:37Z","lastTransitionTime":"2026-01-29T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.794964 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.795028 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.795041 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.795065 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.795077 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:37Z","lastTransitionTime":"2026-01-29T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.897617 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.897650 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.897658 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.897674 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:37 crc kubenswrapper[4660]: I0129 12:07:37.897683 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:37Z","lastTransitionTime":"2026-01-29T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.000006 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.000051 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.000061 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.000077 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.000088 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:38Z","lastTransitionTime":"2026-01-29T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.102903 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.102945 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.102960 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.102979 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.102995 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:38Z","lastTransitionTime":"2026-01-29T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.205384 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.205428 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.205438 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.205454 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.205463 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:38Z","lastTransitionTime":"2026-01-29T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.307758 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.307788 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.307799 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.307813 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.307846 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:38Z","lastTransitionTime":"2026-01-29T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.410090 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.410152 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.410170 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.410190 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.410205 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:38Z","lastTransitionTime":"2026-01-29T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.469210 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:38 crc kubenswrapper[4660]: E0129 12:07:38.469446 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.469900 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:38 crc kubenswrapper[4660]: E0129 12:07:38.470081 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.470381 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.470506 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 10:34:39.473682401 +0000 UTC Jan 29 12:07:38 crc kubenswrapper[4660]: E0129 12:07:38.470520 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.470635 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:38 crc kubenswrapper[4660]: E0129 12:07:38.470894 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.512328 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.512386 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.512403 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.512425 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.512442 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:38Z","lastTransitionTime":"2026-01-29T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.614846 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.614977 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.615000 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.615058 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.615073 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:38Z","lastTransitionTime":"2026-01-29T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.717565 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.717602 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.717613 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.717629 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.717640 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:38Z","lastTransitionTime":"2026-01-29T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.820470 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.820542 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.820556 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.820583 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.820610 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:38Z","lastTransitionTime":"2026-01-29T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.922763 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.922814 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.922826 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.922842 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:38 crc kubenswrapper[4660]: I0129 12:07:38.922854 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:38Z","lastTransitionTime":"2026-01-29T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.025922 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.025974 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.025983 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.025997 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.026006 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:39Z","lastTransitionTime":"2026-01-29T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.128300 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.128375 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.128395 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.128428 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.128452 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:39Z","lastTransitionTime":"2026-01-29T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.235659 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.235758 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.235798 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.235827 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.235840 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:39Z","lastTransitionTime":"2026-01-29T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.339208 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.339265 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.339275 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.339295 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.339310 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:39Z","lastTransitionTime":"2026-01-29T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.442120 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.442159 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.442169 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.442186 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.442200 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:39Z","lastTransitionTime":"2026-01-29T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.471228 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 09:04:47.283536571 +0000 UTC Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.544641 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.544739 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.544782 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.544803 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.544815 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:39Z","lastTransitionTime":"2026-01-29T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.647761 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.647812 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.647824 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.647843 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.647856 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:39Z","lastTransitionTime":"2026-01-29T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.749985 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.750052 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.750065 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.750080 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.750112 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:39Z","lastTransitionTime":"2026-01-29T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.852737 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.852768 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.852778 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.852791 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.852801 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:39Z","lastTransitionTime":"2026-01-29T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.954830 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.954971 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.954992 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.955012 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:39 crc kubenswrapper[4660]: I0129 12:07:39.955048 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:39Z","lastTransitionTime":"2026-01-29T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.057460 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.057505 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.057515 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.057528 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.057539 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:40Z","lastTransitionTime":"2026-01-29T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.161321 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.161456 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.161520 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.161548 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.161564 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:40Z","lastTransitionTime":"2026-01-29T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.263747 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.263806 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.263818 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.263836 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.263848 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:40Z","lastTransitionTime":"2026-01-29T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.366311 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.366347 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.366360 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.366378 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.366390 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:40Z","lastTransitionTime":"2026-01-29T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.468766 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.468838 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:40 crc kubenswrapper[4660]: E0129 12:07:40.468889 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.468935 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.468985 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.469001 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.469011 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.469025 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.468838 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.469038 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:40Z","lastTransitionTime":"2026-01-29T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:40 crc kubenswrapper[4660]: E0129 12:07:40.469140 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:40 crc kubenswrapper[4660]: E0129 12:07:40.469067 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:40 crc kubenswrapper[4660]: E0129 12:07:40.469266 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.472374 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:09:27.30745766 +0000 UTC Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.571752 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.571815 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.571831 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.571854 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.571872 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:40Z","lastTransitionTime":"2026-01-29T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.674427 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.674469 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.674483 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.674503 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.674519 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:40Z","lastTransitionTime":"2026-01-29T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.777453 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.777507 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.777522 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.777546 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.777565 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:40Z","lastTransitionTime":"2026-01-29T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.804803 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.804866 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.804884 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.804914 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.804933 4660 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:07:40Z","lastTransitionTime":"2026-01-29T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.861023 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh"] Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.861456 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.864065 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.864135 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.864313 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.865888 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.881386 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs\") pod \"network-metrics-daemon-kj5hd\" (UID: \"37236252-cd23-4e04-8cf2-28b59af3e179\") " pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:40 crc kubenswrapper[4660]: E0129 12:07:40.881526 4660 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:07:40 crc kubenswrapper[4660]: E0129 12:07:40.881568 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs podName:37236252-cd23-4e04-8cf2-28b59af3e179 nodeName:}" failed. No retries permitted until 2026-01-29 12:08:44.8815555 +0000 UTC m=+162.104497632 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs") pod "network-metrics-daemon-kj5hd" (UID: "37236252-cd23-4e04-8cf2-28b59af3e179") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.982068 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f6e8ae7d-b5d3-415f-8f33-91b158424443-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cm6vh\" (UID: \"f6e8ae7d-b5d3-415f-8f33-91b158424443\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.982166 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6e8ae7d-b5d3-415f-8f33-91b158424443-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cm6vh\" (UID: \"f6e8ae7d-b5d3-415f-8f33-91b158424443\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.982280 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f6e8ae7d-b5d3-415f-8f33-91b158424443-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cm6vh\" (UID: \"f6e8ae7d-b5d3-415f-8f33-91b158424443\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.982437 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6e8ae7d-b5d3-415f-8f33-91b158424443-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cm6vh\" (UID: 
\"f6e8ae7d-b5d3-415f-8f33-91b158424443\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:40 crc kubenswrapper[4660]: I0129 12:07:40.982590 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e8ae7d-b5d3-415f-8f33-91b158424443-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cm6vh\" (UID: \"f6e8ae7d-b5d3-415f-8f33-91b158424443\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:41 crc kubenswrapper[4660]: I0129 12:07:41.083533 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6e8ae7d-b5d3-415f-8f33-91b158424443-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cm6vh\" (UID: \"f6e8ae7d-b5d3-415f-8f33-91b158424443\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:41 crc kubenswrapper[4660]: I0129 12:07:41.083626 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6e8ae7d-b5d3-415f-8f33-91b158424443-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cm6vh\" (UID: \"f6e8ae7d-b5d3-415f-8f33-91b158424443\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:41 crc kubenswrapper[4660]: I0129 12:07:41.083654 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f6e8ae7d-b5d3-415f-8f33-91b158424443-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cm6vh\" (UID: \"f6e8ae7d-b5d3-415f-8f33-91b158424443\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:41 crc kubenswrapper[4660]: I0129 12:07:41.083741 4660 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e8ae7d-b5d3-415f-8f33-91b158424443-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cm6vh\" (UID: \"f6e8ae7d-b5d3-415f-8f33-91b158424443\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:41 crc kubenswrapper[4660]: I0129 12:07:41.083767 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f6e8ae7d-b5d3-415f-8f33-91b158424443-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cm6vh\" (UID: \"f6e8ae7d-b5d3-415f-8f33-91b158424443\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:41 crc kubenswrapper[4660]: I0129 12:07:41.083823 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f6e8ae7d-b5d3-415f-8f33-91b158424443-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cm6vh\" (UID: \"f6e8ae7d-b5d3-415f-8f33-91b158424443\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:41 crc kubenswrapper[4660]: I0129 12:07:41.083938 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f6e8ae7d-b5d3-415f-8f33-91b158424443-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cm6vh\" (UID: \"f6e8ae7d-b5d3-415f-8f33-91b158424443\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:41 crc kubenswrapper[4660]: I0129 12:07:41.084499 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6e8ae7d-b5d3-415f-8f33-91b158424443-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cm6vh\" (UID: \"f6e8ae7d-b5d3-415f-8f33-91b158424443\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 
29 12:07:41 crc kubenswrapper[4660]: I0129 12:07:41.090381 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e8ae7d-b5d3-415f-8f33-91b158424443-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cm6vh\" (UID: \"f6e8ae7d-b5d3-415f-8f33-91b158424443\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:41 crc kubenswrapper[4660]: I0129 12:07:41.102343 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6e8ae7d-b5d3-415f-8f33-91b158424443-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cm6vh\" (UID: \"f6e8ae7d-b5d3-415f-8f33-91b158424443\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:41 crc kubenswrapper[4660]: I0129 12:07:41.178781 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" Jan 29 12:07:41 crc kubenswrapper[4660]: W0129 12:07:41.203932 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6e8ae7d_b5d3_415f_8f33_91b158424443.slice/crio-ea4a63919e411a804819595eb21da8b5651331c6280db2a2f976dffbe0fc8f10 WatchSource:0}: Error finding container ea4a63919e411a804819595eb21da8b5651331c6280db2a2f976dffbe0fc8f10: Status 404 returned error can't find the container with id ea4a63919e411a804819595eb21da8b5651331c6280db2a2f976dffbe0fc8f10 Jan 29 12:07:41 crc kubenswrapper[4660]: I0129 12:07:41.472884 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 21:52:51.42962776 +0000 UTC Jan 29 12:07:41 crc kubenswrapper[4660]: I0129 12:07:41.473446 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 12:07:41 
crc kubenswrapper[4660]: I0129 12:07:41.481331 4660 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 12:07:42 crc kubenswrapper[4660]: I0129 12:07:42.171081 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" event={"ID":"f6e8ae7d-b5d3-415f-8f33-91b158424443","Type":"ContainerStarted","Data":"0ef8e40b87275a2fba287551a6fc58393de5b71d6edcd42797b5cd8915a68ef9"} Jan 29 12:07:42 crc kubenswrapper[4660]: I0129 12:07:42.171156 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" event={"ID":"f6e8ae7d-b5d3-415f-8f33-91b158424443","Type":"ContainerStarted","Data":"ea4a63919e411a804819595eb21da8b5651331c6280db2a2f976dffbe0fc8f10"} Jan 29 12:07:42 crc kubenswrapper[4660]: I0129 12:07:42.194413 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cm6vh" podStartSLOduration=80.194383373 podStartE2EDuration="1m20.194383373s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:42.191436196 +0000 UTC m=+99.414378328" watchObservedRunningTime="2026-01-29 12:07:42.194383373 +0000 UTC m=+99.417325505" Jan 29 12:07:42 crc kubenswrapper[4660]: I0129 12:07:42.469005 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:42 crc kubenswrapper[4660]: I0129 12:07:42.469096 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:42 crc kubenswrapper[4660]: I0129 12:07:42.469152 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:42 crc kubenswrapper[4660]: I0129 12:07:42.469028 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:42 crc kubenswrapper[4660]: E0129 12:07:42.469213 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:42 crc kubenswrapper[4660]: E0129 12:07:42.469334 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:42 crc kubenswrapper[4660]: E0129 12:07:42.469451 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:42 crc kubenswrapper[4660]: E0129 12:07:42.469576 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:44 crc kubenswrapper[4660]: I0129 12:07:44.468995 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:44 crc kubenswrapper[4660]: E0129 12:07:44.469150 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:44 crc kubenswrapper[4660]: I0129 12:07:44.469169 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:44 crc kubenswrapper[4660]: E0129 12:07:44.469438 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:44 crc kubenswrapper[4660]: I0129 12:07:44.469489 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:44 crc kubenswrapper[4660]: I0129 12:07:44.469773 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:44 crc kubenswrapper[4660]: E0129 12:07:44.469911 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:44 crc kubenswrapper[4660]: E0129 12:07:44.470074 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:46 crc kubenswrapper[4660]: I0129 12:07:46.468842 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:46 crc kubenswrapper[4660]: E0129 12:07:46.468982 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:46 crc kubenswrapper[4660]: I0129 12:07:46.469176 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:46 crc kubenswrapper[4660]: E0129 12:07:46.469234 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:46 crc kubenswrapper[4660]: I0129 12:07:46.469365 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:46 crc kubenswrapper[4660]: E0129 12:07:46.469409 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:46 crc kubenswrapper[4660]: I0129 12:07:46.469525 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:46 crc kubenswrapper[4660]: E0129 12:07:46.469575 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:48 crc kubenswrapper[4660]: I0129 12:07:48.469144 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:48 crc kubenswrapper[4660]: I0129 12:07:48.469196 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:48 crc kubenswrapper[4660]: I0129 12:07:48.469190 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:48 crc kubenswrapper[4660]: I0129 12:07:48.469174 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:48 crc kubenswrapper[4660]: E0129 12:07:48.469359 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:48 crc kubenswrapper[4660]: E0129 12:07:48.469512 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:48 crc kubenswrapper[4660]: E0129 12:07:48.469608 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:48 crc kubenswrapper[4660]: E0129 12:07:48.469717 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:50 crc kubenswrapper[4660]: I0129 12:07:50.468953 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:50 crc kubenswrapper[4660]: I0129 12:07:50.469034 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:50 crc kubenswrapper[4660]: I0129 12:07:50.469037 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:50 crc kubenswrapper[4660]: E0129 12:07:50.469515 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:50 crc kubenswrapper[4660]: E0129 12:07:50.469626 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:50 crc kubenswrapper[4660]: E0129 12:07:50.469706 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:50 crc kubenswrapper[4660]: I0129 12:07:50.469807 4660 scope.go:117] "RemoveContainer" containerID="b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637" Jan 29 12:07:50 crc kubenswrapper[4660]: E0129 12:07:50.469972 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" Jan 29 12:07:50 crc kubenswrapper[4660]: I0129 12:07:50.470043 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:50 crc kubenswrapper[4660]: E0129 12:07:50.470122 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:52 crc kubenswrapper[4660]: I0129 12:07:52.469077 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:52 crc kubenswrapper[4660]: E0129 12:07:52.469210 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:52 crc kubenswrapper[4660]: I0129 12:07:52.469814 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:52 crc kubenswrapper[4660]: I0129 12:07:52.470019 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:52 crc kubenswrapper[4660]: I0129 12:07:52.470113 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:52 crc kubenswrapper[4660]: E0129 12:07:52.470017 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:52 crc kubenswrapper[4660]: E0129 12:07:52.470167 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:52 crc kubenswrapper[4660]: E0129 12:07:52.470251 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:54 crc kubenswrapper[4660]: I0129 12:07:54.468904 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:54 crc kubenswrapper[4660]: I0129 12:07:54.468922 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:54 crc kubenswrapper[4660]: I0129 12:07:54.468900 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:54 crc kubenswrapper[4660]: I0129 12:07:54.469034 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:54 crc kubenswrapper[4660]: E0129 12:07:54.469151 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:54 crc kubenswrapper[4660]: E0129 12:07:54.469274 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:54 crc kubenswrapper[4660]: E0129 12:07:54.469380 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:54 crc kubenswrapper[4660]: E0129 12:07:54.469446 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:56 crc kubenswrapper[4660]: I0129 12:07:56.469750 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:56 crc kubenswrapper[4660]: I0129 12:07:56.469778 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:56 crc kubenswrapper[4660]: E0129 12:07:56.470598 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:56 crc kubenswrapper[4660]: I0129 12:07:56.469948 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:56 crc kubenswrapper[4660]: E0129 12:07:56.470653 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:56 crc kubenswrapper[4660]: I0129 12:07:56.469866 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:56 crc kubenswrapper[4660]: E0129 12:07:56.470807 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:56 crc kubenswrapper[4660]: E0129 12:07:56.470706 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:58 crc kubenswrapper[4660]: I0129 12:07:58.469037 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:07:58 crc kubenswrapper[4660]: I0129 12:07:58.469105 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:07:58 crc kubenswrapper[4660]: I0129 12:07:58.469163 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:07:58 crc kubenswrapper[4660]: E0129 12:07:58.469210 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:07:58 crc kubenswrapper[4660]: E0129 12:07:58.469330 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:07:58 crc kubenswrapper[4660]: E0129 12:07:58.469416 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:07:58 crc kubenswrapper[4660]: I0129 12:07:58.469822 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:58 crc kubenswrapper[4660]: E0129 12:07:58.469927 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:07:59 crc kubenswrapper[4660]: I0129 12:07:59.221599 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vb4nc_f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3/kube-multus/1.log" Jan 29 12:07:59 crc kubenswrapper[4660]: I0129 12:07:59.222002 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vb4nc_f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3/kube-multus/0.log" Jan 29 12:07:59 crc kubenswrapper[4660]: I0129 12:07:59.222037 4660 generic.go:334] "Generic (PLEG): container finished" podID="f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3" containerID="799a1c742d7cd17cf905dd5492628f201d1c5861e4b848ab1b62961075aa828f" exitCode=1 Jan 29 12:07:59 crc kubenswrapper[4660]: I0129 12:07:59.222065 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vb4nc" event={"ID":"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3","Type":"ContainerDied","Data":"799a1c742d7cd17cf905dd5492628f201d1c5861e4b848ab1b62961075aa828f"} Jan 29 12:07:59 crc kubenswrapper[4660]: I0129 12:07:59.222097 4660 scope.go:117] "RemoveContainer" containerID="222ddcc82ebda48fa1b1b67a2fdb44a92210ac15cc36604881842217ea792493" Jan 29 12:07:59 crc kubenswrapper[4660]: I0129 12:07:59.222421 4660 scope.go:117] "RemoveContainer" containerID="799a1c742d7cd17cf905dd5492628f201d1c5861e4b848ab1b62961075aa828f" Jan 29 12:07:59 crc kubenswrapper[4660]: E0129 12:07:59.222545 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-vb4nc_openshift-multus(f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3)\"" pod="openshift-multus/multus-vb4nc" podUID="f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3" Jan 29 12:08:00 crc kubenswrapper[4660]: I0129 12:08:00.226933 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-vb4nc_f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3/kube-multus/1.log" Jan 29 12:08:00 crc kubenswrapper[4660]: I0129 12:08:00.469569 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:08:00 crc kubenswrapper[4660]: I0129 12:08:00.469633 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:08:00 crc kubenswrapper[4660]: E0129 12:08:00.469709 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:08:00 crc kubenswrapper[4660]: I0129 12:08:00.469569 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:08:00 crc kubenswrapper[4660]: E0129 12:08:00.469934 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:08:00 crc kubenswrapper[4660]: E0129 12:08:00.469985 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:08:00 crc kubenswrapper[4660]: I0129 12:08:00.470104 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:00 crc kubenswrapper[4660]: E0129 12:08:00.470287 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:08:01 crc kubenswrapper[4660]: I0129 12:08:01.471015 4660 scope.go:117] "RemoveContainer" containerID="b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637" Jan 29 12:08:01 crc kubenswrapper[4660]: E0129 12:08:01.471330 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-clbcs_openshift-ovn-kubernetes(39de46a2-9cba-4331-aab2-697f0337563c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" Jan 29 12:08:02 crc kubenswrapper[4660]: I0129 12:08:02.469681 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:08:02 crc kubenswrapper[4660]: I0129 12:08:02.469733 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:02 crc kubenswrapper[4660]: I0129 12:08:02.469779 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:08:02 crc kubenswrapper[4660]: I0129 12:08:02.469791 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:08:02 crc kubenswrapper[4660]: E0129 12:08:02.469903 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:08:02 crc kubenswrapper[4660]: E0129 12:08:02.470130 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:08:02 crc kubenswrapper[4660]: E0129 12:08:02.470304 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:08:02 crc kubenswrapper[4660]: E0129 12:08:02.470455 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:08:03 crc kubenswrapper[4660]: E0129 12:08:03.486937 4660 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 12:08:03 crc kubenswrapper[4660]: E0129 12:08:03.555046 4660 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 12:08:04 crc kubenswrapper[4660]: I0129 12:08:04.469758 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:08:04 crc kubenswrapper[4660]: I0129 12:08:04.469758 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:08:04 crc kubenswrapper[4660]: I0129 12:08:04.470134 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:08:04 crc kubenswrapper[4660]: I0129 12:08:04.469773 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:04 crc kubenswrapper[4660]: E0129 12:08:04.469900 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:08:04 crc kubenswrapper[4660]: E0129 12:08:04.470234 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:08:04 crc kubenswrapper[4660]: E0129 12:08:04.470136 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:08:04 crc kubenswrapper[4660]: E0129 12:08:04.470317 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:08:06 crc kubenswrapper[4660]: I0129 12:08:06.469604 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:08:06 crc kubenswrapper[4660]: I0129 12:08:06.469610 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:08:06 crc kubenswrapper[4660]: E0129 12:08:06.469780 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:08:06 crc kubenswrapper[4660]: E0129 12:08:06.469854 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:08:06 crc kubenswrapper[4660]: I0129 12:08:06.470435 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:06 crc kubenswrapper[4660]: E0129 12:08:06.470607 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:08:06 crc kubenswrapper[4660]: I0129 12:08:06.470740 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:08:06 crc kubenswrapper[4660]: E0129 12:08:06.470864 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:08:08 crc kubenswrapper[4660]: I0129 12:08:08.469530 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:08 crc kubenswrapper[4660]: I0129 12:08:08.469587 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:08:08 crc kubenswrapper[4660]: I0129 12:08:08.469555 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:08:08 crc kubenswrapper[4660]: E0129 12:08:08.469719 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:08:08 crc kubenswrapper[4660]: E0129 12:08:08.469818 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:08:08 crc kubenswrapper[4660]: I0129 12:08:08.469847 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:08:08 crc kubenswrapper[4660]: E0129 12:08:08.469935 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:08:08 crc kubenswrapper[4660]: E0129 12:08:08.470018 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:08:08 crc kubenswrapper[4660]: E0129 12:08:08.556632 4660 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 12:08:10 crc kubenswrapper[4660]: I0129 12:08:10.469732 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:08:10 crc kubenswrapper[4660]: I0129 12:08:10.469743 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:10 crc kubenswrapper[4660]: I0129 12:08:10.469845 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:08:10 crc kubenswrapper[4660]: E0129 12:08:10.469879 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:08:10 crc kubenswrapper[4660]: I0129 12:08:10.469750 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:08:10 crc kubenswrapper[4660]: E0129 12:08:10.470000 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:08:10 crc kubenswrapper[4660]: E0129 12:08:10.470111 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:08:10 crc kubenswrapper[4660]: E0129 12:08:10.470210 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:08:12 crc kubenswrapper[4660]: I0129 12:08:12.469218 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:08:12 crc kubenswrapper[4660]: I0129 12:08:12.469257 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:08:12 crc kubenswrapper[4660]: I0129 12:08:12.469273 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:08:12 crc kubenswrapper[4660]: I0129 12:08:12.469246 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:12 crc kubenswrapper[4660]: E0129 12:08:12.469365 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:08:12 crc kubenswrapper[4660]: E0129 12:08:12.469482 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:08:12 crc kubenswrapper[4660]: E0129 12:08:12.469522 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:08:12 crc kubenswrapper[4660]: E0129 12:08:12.469616 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:08:13 crc kubenswrapper[4660]: E0129 12:08:13.557157 4660 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 12:08:14 crc kubenswrapper[4660]: I0129 12:08:14.469629 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:08:14 crc kubenswrapper[4660]: I0129 12:08:14.469652 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:08:14 crc kubenswrapper[4660]: I0129 12:08:14.469772 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:14 crc kubenswrapper[4660]: E0129 12:08:14.469890 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:08:14 crc kubenswrapper[4660]: I0129 12:08:14.469980 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:08:14 crc kubenswrapper[4660]: E0129 12:08:14.470430 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:08:14 crc kubenswrapper[4660]: I0129 12:08:14.470526 4660 scope.go:117] "RemoveContainer" containerID="799a1c742d7cd17cf905dd5492628f201d1c5861e4b848ab1b62961075aa828f" Jan 29 12:08:14 crc kubenswrapper[4660]: E0129 12:08:14.470601 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:08:14 crc kubenswrapper[4660]: E0129 12:08:14.470704 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:08:15 crc kubenswrapper[4660]: I0129 12:08:15.272910 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vb4nc_f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3/kube-multus/1.log" Jan 29 12:08:15 crc kubenswrapper[4660]: I0129 12:08:15.272982 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vb4nc" event={"ID":"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3","Type":"ContainerStarted","Data":"796a63e053200b4ace3f5e3866d8bf84e7f5ce390f9cfa7c5aceb54155e1c137"} Jan 29 12:08:15 crc kubenswrapper[4660]: I0129 12:08:15.470357 4660 scope.go:117] "RemoveContainer" containerID="b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637" Jan 29 12:08:16 crc kubenswrapper[4660]: I0129 12:08:16.201119 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kj5hd"] Jan 29 12:08:16 crc kubenswrapper[4660]: I0129 12:08:16.201520 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:08:16 crc kubenswrapper[4660]: E0129 12:08:16.201639 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:08:16 crc kubenswrapper[4660]: I0129 12:08:16.277852 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovnkube-controller/3.log" Jan 29 12:08:16 crc kubenswrapper[4660]: I0129 12:08:16.280377 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerStarted","Data":"1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac"} Jan 29 12:08:16 crc kubenswrapper[4660]: I0129 12:08:16.280814 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:08:16 crc kubenswrapper[4660]: I0129 12:08:16.469637 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:16 crc kubenswrapper[4660]: I0129 12:08:16.469666 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:08:16 crc kubenswrapper[4660]: E0129 12:08:16.469760 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:08:16 crc kubenswrapper[4660]: E0129 12:08:16.469956 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:08:16 crc kubenswrapper[4660]: I0129 12:08:16.470189 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:08:16 crc kubenswrapper[4660]: E0129 12:08:16.470381 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:08:17 crc kubenswrapper[4660]: I0129 12:08:17.469120 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:08:17 crc kubenswrapper[4660]: E0129 12:08:17.469811 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kj5hd" podUID="37236252-cd23-4e04-8cf2-28b59af3e179" Jan 29 12:08:18 crc kubenswrapper[4660]: I0129 12:08:18.469354 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:18 crc kubenswrapper[4660]: I0129 12:08:18.469400 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:08:18 crc kubenswrapper[4660]: I0129 12:08:18.469430 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:08:18 crc kubenswrapper[4660]: E0129 12:08:18.469516 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:08:18 crc kubenswrapper[4660]: E0129 12:08:18.469614 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:08:18 crc kubenswrapper[4660]: E0129 12:08:18.469711 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:08:19 crc kubenswrapper[4660]: I0129 12:08:19.469232 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:08:19 crc kubenswrapper[4660]: I0129 12:08:19.471929 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 12:08:19 crc kubenswrapper[4660]: I0129 12:08:19.472322 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 12:08:20 crc kubenswrapper[4660]: I0129 12:08:20.469652 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:08:20 crc kubenswrapper[4660]: I0129 12:08:20.469743 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:08:20 crc kubenswrapper[4660]: I0129 12:08:20.469925 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:20 crc kubenswrapper[4660]: I0129 12:08:20.472420 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 12:08:20 crc kubenswrapper[4660]: I0129 12:08:20.473238 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 12:08:20 crc kubenswrapper[4660]: I0129 12:08:20.473244 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 12:08:20 crc kubenswrapper[4660]: I0129 12:08:20.474868 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.675278 4660 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.715117 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podStartSLOduration=119.715097309 podStartE2EDuration="1m59.715097309s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:16.311040468 +0000 UTC m=+133.533982620" watchObservedRunningTime="2026-01-29 12:08:21.715097309 +0000 UTC m=+138.938039431" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.716011 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7dvlp"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.716622 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.716985 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.717362 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.718077 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.718559 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.719307 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ccbzg"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.719714 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.720570 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9zrm5"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.721172 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.728310 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.728386 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.728354 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.728855 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.729111 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.729179 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.731368 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h994k"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.731854 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.734955 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.735330 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.735508 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.735678 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.735837 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.735970 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.736101 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.736252 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.736406 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.736575 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.736722 4660 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.737296 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.737456 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.737659 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.737798 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.737927 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.738190 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.738404 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.738868 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.739041 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.739177 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 
12:08:21.739246 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.739245 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.739176 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.739551 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.739892 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.741475 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.741859 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.741998 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.742122 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.742252 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.742390 4660 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.742521 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.742681 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.742833 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.743566 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.743687 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.743832 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.743985 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.744125 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.744481 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.744624 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.744746 4660 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-w8qs9"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.745200 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.746081 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-grgf5"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.746405 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-grgf5" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.751141 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.752615 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-v4bh4"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.753299 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-v4bh4" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.768033 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.773737 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.773821 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.785037 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.786380 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.786952 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.787382 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.787825 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.796431 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.796550 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.798514 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.800105 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.800467 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.816206 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.817559 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.817725 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m92j9\" (UniqueName: 
\"kubernetes.io/projected/9992b62a-1c70-4213-b515-009b40aa326e-kube-api-access-m92j9\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.817962 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1afa8f6d-9033-41f9-b30c-4ce3b4b56399-images\") pod \"machine-api-operator-5694c8668f-7dvlp\" (UID: \"1afa8f6d-9033-41f9-b30c-4ce3b4b56399\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.817978 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9992b62a-1c70-4213-b515-009b40aa326e-encryption-config\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818000 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cafd071-62fd-4601-a1ec-6ff85b52d0f5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-44dmm\" (UID: \"0cafd071-62fd-4601-a1ec-6ff85b52d0f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818029 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-977jl\" (UniqueName: \"kubernetes.io/projected/0cafd071-62fd-4601-a1ec-6ff85b52d0f5-kube-api-access-977jl\") pod \"openshift-apiserver-operator-796bbdcf4f-44dmm\" (UID: \"0cafd071-62fd-4601-a1ec-6ff85b52d0f5\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818049 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9992b62a-1c70-4213-b515-009b40aa326e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818064 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afa8f6d-9033-41f9-b30c-4ce3b4b56399-config\") pod \"machine-api-operator-5694c8668f-7dvlp\" (UID: \"1afa8f6d-9033-41f9-b30c-4ce3b4b56399\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818081 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cafd071-62fd-4601-a1ec-6ff85b52d0f5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-44dmm\" (UID: \"0cafd071-62fd-4601-a1ec-6ff85b52d0f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818095 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9992b62a-1c70-4213-b515-009b40aa326e-serving-cert\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818112 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/1afa8f6d-9033-41f9-b30c-4ce3b4b56399-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7dvlp\" (UID: \"1afa8f6d-9033-41f9-b30c-4ce3b4b56399\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818135 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9992b62a-1c70-4213-b515-009b40aa326e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818150 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vvzh\" (UniqueName: \"kubernetes.io/projected/1afa8f6d-9033-41f9-b30c-4ce3b4b56399-kube-api-access-2vvzh\") pod \"machine-api-operator-5694c8668f-7dvlp\" (UID: \"1afa8f6d-9033-41f9-b30c-4ce3b4b56399\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818168 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9992b62a-1c70-4213-b515-009b40aa326e-audit-policies\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818185 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9992b62a-1c70-4213-b515-009b40aa326e-audit-dir\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: 
I0129 12:08:21.818204 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9992b62a-1c70-4213-b515-009b40aa326e-etcd-client\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818066 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818311 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818191 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818459 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818225 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818265 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.818880 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.819274 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.819370 4660 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.819443 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.819529 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.819598 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.819654 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6jrtx"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.819669 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.820203 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.820271 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.820506 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.820774 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6jrtx" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.824558 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ld5r8"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.826254 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.826288 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.826595 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.827117 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.827684 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7dvlp"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.827801 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.828339 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.829037 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.829319 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.829572 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.829997 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.830194 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.831935 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.840827 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.841036 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.841515 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.842114 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.842480 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sn58d"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.843001 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.845516 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.846055 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.848258 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.848503 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.848674 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.848926 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.849119 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.849236 4660 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.849338 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.849432 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.849590 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.849720 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.849766 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.849823 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.863957 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.865219 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.866026 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.867064 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-tvjqj"] Jan 29 12:08:21 crc kubenswrapper[4660]: 
I0129 12:08:21.867191 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.867943 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.868369 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.870517 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.879196 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.881297 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.881439 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.881612 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.883060 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.883304 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 
12:08:21.883433 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.884094 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.884437 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.884963 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.885372 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.885480 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.886056 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hbts9"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.886983 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.887857 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.891471 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-grt4k"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.887979 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.891810 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mcjsl"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.892038 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.892140 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.892455 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.892662 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.892866 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mcjsl" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.894985 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.896720 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ccbzg"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.898175 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.898734 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.899265 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.899392 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.899991 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.902454 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4c2k4"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.903005 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4s8m8"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.903366 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4s8m8" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.903546 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4c2k4" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.903738 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.904158 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.906555 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mfb8p"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.907275 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.907785 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.908016 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mfb8p" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.908247 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h994k"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.909420 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.910436 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.911651 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-knwfb"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.912326 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.912628 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.914848 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8j7l7"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.915705 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-w8qs9"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.915775 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.918782 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mzfr\" (UniqueName: \"kubernetes.io/projected/c5c9df79-c602-4d20-9b74-c96f479d0f03-kube-api-access-2mzfr\") pod \"route-controller-manager-6576b87f9c-wc4dc\" (UID: \"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.918831 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9992b62a-1c70-4213-b515-009b40aa326e-serving-cert\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.918851 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6c466078-87ee-40ea-83ee-11aa309b065f-etcd-serving-ca\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.918868 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c466078-87ee-40ea-83ee-11aa309b065f-audit-dir\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.918890 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/1afa8f6d-9033-41f9-b30c-4ce3b4b56399-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7dvlp\" (UID: \"1afa8f6d-9033-41f9-b30c-4ce3b4b56399\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.918908 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/101df3f7-db56-4a33-a0ec-d513f6785dde-serving-cert\") pod \"controller-manager-879f6c89f-ccbzg\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.918923 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.918950 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9992b62a-1c70-4213-b515-009b40aa326e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.918965 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6c466078-87ee-40ea-83ee-11aa309b065f-encryption-config\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:21 crc 
kubenswrapper[4660]: I0129 12:08:21.918981 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vvzh\" (UniqueName: \"kubernetes.io/projected/1afa8f6d-9033-41f9-b30c-4ce3b4b56399-kube-api-access-2vvzh\") pod \"machine-api-operator-5694c8668f-7dvlp\" (UID: \"1afa8f6d-9033-41f9-b30c-4ce3b4b56399\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.918996 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ccbzg\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919010 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c466078-87ee-40ea-83ee-11aa309b065f-config\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919026 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919042 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/5c4cd802-1046-46c7-9168-0a281e7e92b2-metrics-tls\") pod \"dns-operator-744455d44c-v4bh4\" (UID: \"5c4cd802-1046-46c7-9168-0a281e7e92b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4bh4" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919072 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-client-ca\") pod \"controller-manager-879f6c89f-ccbzg\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919087 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ng6l\" (UniqueName: \"kubernetes.io/projected/5c4cd802-1046-46c7-9168-0a281e7e92b2-kube-api-access-2ng6l\") pod \"dns-operator-744455d44c-v4bh4\" (UID: \"5c4cd802-1046-46c7-9168-0a281e7e92b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4bh4" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919108 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2457x\" (UniqueName: \"kubernetes.io/projected/6c466078-87ee-40ea-83ee-11aa309b065f-kube-api-access-2457x\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919158 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:21 crc 
kubenswrapper[4660]: I0129 12:08:21.919185 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dde2be07-f4f9-4868-801f-4a0b650a5b7f-audit-dir\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919208 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9992b62a-1c70-4213-b515-009b40aa326e-audit-policies\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919228 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9992b62a-1c70-4213-b515-009b40aa326e-audit-dir\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919249 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9992b62a-1c70-4213-b515-009b40aa326e-etcd-client\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919270 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5c9df79-c602-4d20-9b74-c96f479d0f03-serving-cert\") pod \"route-controller-manager-6576b87f9c-wc4dc\" (UID: \"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919294 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m92j9\" (UniqueName: \"kubernetes.io/projected/9992b62a-1c70-4213-b515-009b40aa326e-kube-api-access-m92j9\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919311 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-config\") pod \"controller-manager-879f6c89f-ccbzg\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919327 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c466078-87ee-40ea-83ee-11aa309b065f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919342 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6c466078-87ee-40ea-83ee-11aa309b065f-audit\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919356 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/6c466078-87ee-40ea-83ee-11aa309b065f-etcd-client\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919370 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hkff\" (UniqueName: \"kubernetes.io/projected/a749f255-61ea-4370-87fc-2d13276383b0-kube-api-access-6hkff\") pod \"console-operator-58897d9998-w8qs9\" (UID: \"a749f255-61ea-4370-87fc-2d13276383b0\") " pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919393 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5c9df79-c602-4d20-9b74-c96f479d0f03-client-ca\") pod \"route-controller-manager-6576b87f9c-wc4dc\" (UID: \"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919409 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbhmz\" (UniqueName: \"kubernetes.io/projected/5d163d7f-d49a-4487-9a62-a094182ac910-kube-api-access-kbhmz\") pod \"downloads-7954f5f757-grgf5\" (UID: \"5d163d7f-d49a-4487-9a62-a094182ac910\") " pod="openshift-console/downloads-7954f5f757-grgf5" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919425 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/394d4964-a60d-4ad5-90e7-44a7e36eae71-auth-proxy-config\") pod \"machine-approver-56656f9798-pc48d\" (UID: \"394d4964-a60d-4ad5-90e7-44a7e36eae71\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" Jan 29 
12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919440 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919456 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a749f255-61ea-4370-87fc-2d13276383b0-serving-cert\") pod \"console-operator-58897d9998-w8qs9\" (UID: \"a749f255-61ea-4370-87fc-2d13276383b0\") " pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919473 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1afa8f6d-9033-41f9-b30c-4ce3b4b56399-images\") pod \"machine-api-operator-5694c8668f-7dvlp\" (UID: \"1afa8f6d-9033-41f9-b30c-4ce3b4b56399\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919489 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9992b62a-1c70-4213-b515-009b40aa326e-encryption-config\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919510 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c466078-87ee-40ea-83ee-11aa309b065f-serving-cert\") pod 
\"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919527 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919543 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd2bk\" (UniqueName: \"kubernetes.io/projected/dde2be07-f4f9-4868-801f-4a0b650a5b7f-kube-api-access-rd2bk\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919558 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-audit-policies\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919573 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8nbd\" (UniqueName: \"kubernetes.io/projected/d78c77cc-e724-4f09-b613-08983a1d2658-kube-api-access-n8nbd\") pod \"cluster-image-registry-operator-dc59b4c8b-c5c4w\" (UID: \"d78c77cc-e724-4f09-b613-08983a1d2658\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" Jan 29 12:08:21 crc kubenswrapper[4660]: 
I0129 12:08:21.919589 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919604 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5c9df79-c602-4d20-9b74-c96f479d0f03-config\") pod \"route-controller-manager-6576b87f9c-wc4dc\" (UID: \"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919620 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cafd071-62fd-4601-a1ec-6ff85b52d0f5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-44dmm\" (UID: \"0cafd071-62fd-4601-a1ec-6ff85b52d0f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919634 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a749f255-61ea-4370-87fc-2d13276383b0-config\") pod \"console-operator-58897d9998-w8qs9\" (UID: \"a749f255-61ea-4370-87fc-2d13276383b0\") " pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919666 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a749f255-61ea-4370-87fc-2d13276383b0-trusted-ca\") pod 
\"console-operator-58897d9998-w8qs9\" (UID: \"a749f255-61ea-4370-87fc-2d13276383b0\") " pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919681 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919727 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhplm\" (UniqueName: \"kubernetes.io/projected/394d4964-a60d-4ad5-90e7-44a7e36eae71-kube-api-access-rhplm\") pod \"machine-approver-56656f9798-pc48d\" (UID: \"394d4964-a60d-4ad5-90e7-44a7e36eae71\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919744 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6c466078-87ee-40ea-83ee-11aa309b065f-image-import-ca\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919759 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 
12:08:21.919775 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-977jl\" (UniqueName: \"kubernetes.io/projected/0cafd071-62fd-4601-a1ec-6ff85b52d0f5-kube-api-access-977jl\") pod \"openshift-apiserver-operator-796bbdcf4f-44dmm\" (UID: \"0cafd071-62fd-4601-a1ec-6ff85b52d0f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919792 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9992b62a-1c70-4213-b515-009b40aa326e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919808 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919824 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d78c77cc-e724-4f09-b613-08983a1d2658-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c5c4w\" (UID: \"d78c77cc-e724-4f09-b613-08983a1d2658\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919839 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/394d4964-a60d-4ad5-90e7-44a7e36eae71-machine-approver-tls\") pod \"machine-approver-56656f9798-pc48d\" (UID: \"394d4964-a60d-4ad5-90e7-44a7e36eae71\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919855 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d78c77cc-e724-4f09-b613-08983a1d2658-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c5c4w\" (UID: \"d78c77cc-e724-4f09-b613-08983a1d2658\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919872 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k66dr\" (UniqueName: \"kubernetes.io/projected/101df3f7-db56-4a33-a0ec-d513f6785dde-kube-api-access-k66dr\") pod \"controller-manager-879f6c89f-ccbzg\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919887 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c466078-87ee-40ea-83ee-11aa309b065f-node-pullsecrets\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919902 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394d4964-a60d-4ad5-90e7-44a7e36eae71-config\") pod \"machine-approver-56656f9798-pc48d\" (UID: \"394d4964-a60d-4ad5-90e7-44a7e36eae71\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919919 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919935 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afa8f6d-9033-41f9-b30c-4ce3b4b56399-config\") pod \"machine-api-operator-5694c8668f-7dvlp\" (UID: \"1afa8f6d-9033-41f9-b30c-4ce3b4b56399\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919950 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cafd071-62fd-4601-a1ec-6ff85b52d0f5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-44dmm\" (UID: \"0cafd071-62fd-4601-a1ec-6ff85b52d0f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919966 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.919981 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d78c77cc-e724-4f09-b613-08983a1d2658-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c5c4w\" (UID: \"d78c77cc-e724-4f09-b613-08983a1d2658\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.920911 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.922354 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9992b62a-1c70-4213-b515-009b40aa326e-audit-policies\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.922431 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9992b62a-1c70-4213-b515-009b40aa326e-audit-dir\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.925647 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1afa8f6d-9033-41f9-b30c-4ce3b4b56399-images\") pod \"machine-api-operator-5694c8668f-7dvlp\" (UID: \"1afa8f6d-9033-41f9-b30c-4ce3b4b56399\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.926010 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9992b62a-1c70-4213-b515-009b40aa326e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.926392 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afa8f6d-9033-41f9-b30c-4ce3b4b56399-config\") pod \"machine-api-operator-5694c8668f-7dvlp\" (UID: \"1afa8f6d-9033-41f9-b30c-4ce3b4b56399\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.926493 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9992b62a-1c70-4213-b515-009b40aa326e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.927011 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cafd071-62fd-4601-a1ec-6ff85b52d0f5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-44dmm\" (UID: \"0cafd071-62fd-4601-a1ec-6ff85b52d0f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.937024 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9992b62a-1c70-4213-b515-009b40aa326e-etcd-client\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.938415 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.940961 4660 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9992b62a-1c70-4213-b515-009b40aa326e-encryption-config\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.941026 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9zrm5"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.941537 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1afa8f6d-9033-41f9-b30c-4ce3b4b56399-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7dvlp\" (UID: \"1afa8f6d-9033-41f9-b30c-4ce3b4b56399\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.945159 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6jrtx"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.945869 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-v4bh4"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.949891 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.952497 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cafd071-62fd-4601-a1ec-6ff85b52d0f5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-44dmm\" (UID: \"0cafd071-62fd-4601-a1ec-6ff85b52d0f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.953668 4660 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.955642 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.960442 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.961133 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kjbsr"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.962203 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kjbsr" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.962987 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wtvmf"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.963910 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-wtvmf" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.966220 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sn58d"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.967744 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ld5r8"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.978615 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.978739 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9992b62a-1c70-4213-b515-009b40aa326e-serving-cert\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.979965 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z"] Jan 29 12:08:21 crc kubenswrapper[4660]: I0129 12:08:21.998840 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.000222 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.001278 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-grgf5"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.005481 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv"] Jan 29 12:08:22 crc 
kubenswrapper[4660]: I0129 12:08:22.009037 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.014667 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.016613 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4c2k4"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.018275 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.019745 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-grt4k"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020457 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5c9df79-c602-4d20-9b74-c96f479d0f03-config\") pod \"route-controller-manager-6576b87f9c-wc4dc\" (UID: \"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020484 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/486306e9-968b-4d73-932e-8f12efbf3204-srv-cert\") pod \"catalog-operator-68c6474976-dhkss\" (UID: \"486306e9-968b-4d73-932e-8f12efbf3204\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020503 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxfp6\" 
(UniqueName: \"kubernetes.io/projected/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-kube-api-access-hxfp6\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020521 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a749f255-61ea-4370-87fc-2d13276383b0-config\") pod \"console-operator-58897d9998-w8qs9\" (UID: \"a749f255-61ea-4370-87fc-2d13276383b0\") " pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020534 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd-service-ca-bundle\") pod \"router-default-5444994796-hbts9\" (UID: \"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd\") " pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020562 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e818a9e1-a6ae-4d43-aca2-d9cad02e57ba-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s6zhh\" (UID: \"e818a9e1-a6ae-4d43-aca2-d9cad02e57ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020576 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a749f255-61ea-4370-87fc-2d13276383b0-trusted-ca\") pod \"console-operator-58897d9998-w8qs9\" (UID: \"a749f255-61ea-4370-87fc-2d13276383b0\") " pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020592 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020608 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d78c77cc-e724-4f09-b613-08983a1d2658-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c5c4w\" (UID: \"d78c77cc-e724-4f09-b613-08983a1d2658\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020624 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/394d4964-a60d-4ad5-90e7-44a7e36eae71-machine-approver-tls\") pod \"machine-approver-56656f9798-pc48d\" (UID: \"394d4964-a60d-4ad5-90e7-44a7e36eae71\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020641 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/170df6ed-3e79-4577-ba9f-20e4c075128c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qqwgw\" (UID: \"170df6ed-3e79-4577-ba9f-20e4c075128c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020657 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d1456961-1af8-402a-9ebd-9cb419b85701-serving-cert\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020671 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/48991717-d2c1-427b-8b52-dc549b2e87d9-proxy-tls\") pod \"machine-config-controller-84d6567774-z87xh\" (UID: \"48991717-d2c1-427b-8b52-dc549b2e87d9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020724 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020743 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d78c77cc-e724-4f09-b613-08983a1d2658-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c5c4w\" (UID: \"d78c77cc-e724-4f09-b613-08983a1d2658\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020760 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6c466078-87ee-40ea-83ee-11aa309b065f-etcd-serving-ca\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 
12:08:22.020776 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59xm7\" (UniqueName: \"kubernetes.io/projected/d1456961-1af8-402a-9ebd-9cb419b85701-kube-api-access-59xm7\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020813 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/101df3f7-db56-4a33-a0ec-d513f6785dde-serving-cert\") pod \"controller-manager-879f6c89f-ccbzg\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020831 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5mh7\" (UniqueName: \"kubernetes.io/projected/398de0cd-73e3-46c2-9ed1-4921e64fe50b-kube-api-access-d5mh7\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020847 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a6abc12-af78-4433-84ae-8421ade4d80c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-grt4k\" (UID: \"2a6abc12-af78-4433-84ae-8421ade4d80c\") " pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020883 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-socket-dir\") pod 
\"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020901 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5kq7\" (UniqueName: \"kubernetes.io/projected/170df6ed-3e79-4577-ba9f-20e4c075128c-kube-api-access-v5kq7\") pod \"package-server-manager-789f6589d5-qqwgw\" (UID: \"170df6ed-3e79-4577-ba9f-20e4c075128c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020915 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ccbzg\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020931 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6c466078-87ee-40ea-83ee-11aa309b065f-encryption-config\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020967 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2dls\" (UniqueName: \"kubernetes.io/projected/4064e41f-ba1f-4b56-ac8b-2b50579d0953-kube-api-access-n2dls\") pod \"service-ca-9c57cc56f-mcjsl\" (UID: \"4064e41f-ba1f-4b56-ac8b-2b50579d0953\") " pod="openshift-service-ca/service-ca-9c57cc56f-mcjsl" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.020989 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-config\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021006 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd-default-certificate\") pod \"router-default-5444994796-hbts9\" (UID: \"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd\") " pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021040 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8slz\" (UniqueName: \"kubernetes.io/projected/08bfd5dc-3708-4cc8-acf8-8fa2a91aef54-kube-api-access-c8slz\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvk4l\" (UID: \"08bfd5dc-3708-4cc8-acf8-8fa2a91aef54\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021061 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-client-ca\") pod \"controller-manager-879f6c89f-ccbzg\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021077 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e853a192-370e-4deb-9668-671fd221b7fc-serving-cert\") pod \"service-ca-operator-777779d784-p5h6z\" (UID: 
\"e853a192-370e-4deb-9668-671fd221b7fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021092 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3064f75-3f20-425e-94fc-9b2db0147c1d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-54d55\" (UID: \"b3064f75-3f20-425e-94fc-9b2db0147c1d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021127 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ng6l\" (UniqueName: \"kubernetes.io/projected/5c4cd802-1046-46c7-9168-0a281e7e92b2-kube-api-access-2ng6l\") pod \"dns-operator-744455d44c-v4bh4\" (UID: \"5c4cd802-1046-46c7-9168-0a281e7e92b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4bh4" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021143 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9vsf\" (UniqueName: \"kubernetes.io/projected/1362cdda-d5a4-416f-8f65-6a631433d1ef-kube-api-access-h9vsf\") pod \"machine-config-operator-74547568cd-lst8h\" (UID: \"1362cdda-d5a4-416f-8f65-6a631433d1ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021158 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/398de0cd-73e3-46c2-9ed1-4921e64fe50b-serving-cert\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021173 4660 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba274694-159c-4f63-9aff-54ba10d6f5ed-serving-cert\") pod \"openshift-config-operator-7777fb866f-9z9h5\" (UID: \"ba274694-159c-4f63-9aff-54ba10d6f5ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021186 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-mountpoint-dir\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021260 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dde2be07-f4f9-4868-801f-4a0b650a5b7f-audit-dir\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021293 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxhdb\" (UniqueName: \"kubernetes.io/projected/8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd-kube-api-access-lxhdb\") pod \"router-default-5444994796-hbts9\" (UID: \"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd\") " pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021329 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5c9df79-c602-4d20-9b74-c96f479d0f03-serving-cert\") pod \"route-controller-manager-6576b87f9c-wc4dc\" (UID: \"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021348 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4064e41f-ba1f-4b56-ac8b-2b50579d0953-signing-cabundle\") pod \"service-ca-9c57cc56f-mcjsl\" (UID: \"4064e41f-ba1f-4b56-ac8b-2b50579d0953\") " pod="openshift-service-ca/service-ca-9c57cc56f-mcjsl" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021372 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2sb5\" (UniqueName: \"kubernetes.io/projected/7244f40b-2b72-48e2-bd02-5fdc718a460b-kube-api-access-n2sb5\") pod \"control-plane-machine-set-operator-78cbb6b69f-4s8m8\" (UID: \"7244f40b-2b72-48e2-bd02-5fdc718a460b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4s8m8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021404 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5c9df79-c602-4d20-9b74-c96f479d0f03-client-ca\") pod \"route-controller-manager-6576b87f9c-wc4dc\" (UID: \"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021423 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ba274694-159c-4f63-9aff-54ba10d6f5ed-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9z9h5\" (UID: \"ba274694-159c-4f63-9aff-54ba10d6f5ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021441 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kbhmz\" (UniqueName: \"kubernetes.io/projected/5d163d7f-d49a-4487-9a62-a094182ac910-kube-api-access-kbhmz\") pod \"downloads-7954f5f757-grgf5\" (UID: \"5d163d7f-d49a-4487-9a62-a094182ac910\") " pod="openshift-console/downloads-7954f5f757-grgf5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021459 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a749f255-61ea-4370-87fc-2d13276383b0-serving-cert\") pod \"console-operator-58897d9998-w8qs9\" (UID: \"a749f255-61ea-4370-87fc-2d13276383b0\") " pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021480 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7wcs\" (UniqueName: \"kubernetes.io/projected/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-kube-api-access-t7wcs\") pod \"collect-profiles-29494800-96nss\" (UID: \"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021509 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ab0e310-7e1b-404d-b763-b6813d39d49d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6jrtx\" (UID: \"4ab0e310-7e1b-404d-b763-b6813d39d49d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6jrtx" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021536 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd2bk\" (UniqueName: \"kubernetes.io/projected/dde2be07-f4f9-4868-801f-4a0b650a5b7f-kube-api-access-rd2bk\") pod \"oauth-openshift-558db77b4-h994k\" (UID: 
\"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021568 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c466078-87ee-40ea-83ee-11aa309b065f-serving-cert\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021606 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gww6h\" (UniqueName: \"kubernetes.io/projected/486306e9-968b-4d73-932e-8f12efbf3204-kube-api-access-gww6h\") pod \"catalog-operator-68c6474976-dhkss\" (UID: \"486306e9-968b-4d73-932e-8f12efbf3204\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021630 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jgzj\" (UniqueName: \"kubernetes.io/projected/2a6abc12-af78-4433-84ae-8421ade4d80c-kube-api-access-6jgzj\") pod \"marketplace-operator-79b997595-grt4k\" (UID: \"2a6abc12-af78-4433-84ae-8421ade4d80c\") " pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021654 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021677 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-secret-volume\") pod \"collect-profiles-29494800-96nss\" (UID: \"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021720 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4064e41f-ba1f-4b56-ac8b-2b50579d0953-signing-key\") pod \"service-ca-9c57cc56f-mcjsl\" (UID: \"4064e41f-ba1f-4b56-ac8b-2b50579d0953\") " pod="openshift-service-ca/service-ca-9c57cc56f-mcjsl" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021740 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e853a192-370e-4deb-9668-671fd221b7fc-config\") pod \"service-ca-operator-777779d784-p5h6z\" (UID: \"e853a192-370e-4deb-9668-671fd221b7fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021757 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e818a9e1-a6ae-4d43-aca2-d9cad02e57ba-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s6zhh\" (UID: \"e818a9e1-a6ae-4d43-aca2-d9cad02e57ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021772 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-csi-data-dir\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc 
kubenswrapper[4660]: I0129 12:08:22.021788 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3064f75-3f20-425e-94fc-9b2db0147c1d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-54d55\" (UID: \"b3064f75-3f20-425e-94fc-9b2db0147c1d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021804 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9a689c-66c4-46e8-ba11-951ab4eaf0d1-config\") pod \"kube-controller-manager-operator-78b949d7b-w5xjp\" (UID: \"6e9a689c-66c4-46e8-ba11-951ab4eaf0d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021821 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021840 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhplm\" (UniqueName: \"kubernetes.io/projected/394d4964-a60d-4ad5-90e7-44a7e36eae71-kube-api-access-rhplm\") pod \"machine-approver-56656f9798-pc48d\" (UID: \"394d4964-a60d-4ad5-90e7-44a7e36eae71\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021857 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv5zp\" (UniqueName: 
\"kubernetes.io/projected/5d92dd16-4a3a-42f4-9260-241ab774a2ea-kube-api-access-wv5zp\") pod \"migrator-59844c95c7-mfb8p\" (UID: \"5d92dd16-4a3a-42f4-9260-241ab774a2ea\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mfb8p" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021872 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6c466078-87ee-40ea-83ee-11aa309b065f-image-import-ca\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021890 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-service-ca-bundle\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021905 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxqbs\" (UniqueName: \"kubernetes.io/projected/4ab0e310-7e1b-404d-b763-b6813d39d49d-kube-api-access-mxqbs\") pod \"cluster-samples-operator-665b6dd947-6jrtx\" (UID: \"4ab0e310-7e1b-404d-b763-b6813d39d49d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6jrtx" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021959 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021976 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e9a689c-66c4-46e8-ba11-951ab4eaf0d1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-w5xjp\" (UID: \"6e9a689c-66c4-46e8-ba11-951ab4eaf0d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.021995 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k66dr\" (UniqueName: \"kubernetes.io/projected/101df3f7-db56-4a33-a0ec-d513f6785dde-kube-api-access-k66dr\") pod \"controller-manager-879f6c89f-ccbzg\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.022011 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d78c77cc-e724-4f09-b613-08983a1d2658-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c5c4w\" (UID: \"d78c77cc-e724-4f09-b613-08983a1d2658\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.022027 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-config-volume\") pod \"collect-profiles-29494800-96nss\" (UID: \"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.022041 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8d4807a1-3e06-4ec0-9e43-70d5c9755f61-srv-cert\") pod \"olm-operator-6b444d44fb-7c4dl\" (UID: \"8d4807a1-3e06-4ec0-9e43-70d5c9755f61\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.022057 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c466078-87ee-40ea-83ee-11aa309b065f-node-pullsecrets\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.022074 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394d4964-a60d-4ad5-90e7-44a7e36eae71-config\") pod \"machine-approver-56656f9798-pc48d\" (UID: \"394d4964-a60d-4ad5-90e7-44a7e36eae71\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.024276 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1456961-1af8-402a-9ebd-9cb419b85701-config\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.024313 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/486306e9-968b-4d73-932e-8f12efbf3204-profile-collector-cert\") pod \"catalog-operator-68c6474976-dhkss\" (UID: \"486306e9-968b-4d73-932e-8f12efbf3204\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.024339 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.024364 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa0d62b-e9d2-4163-beb7-74d499965b65-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgbpf\" (UID: \"4aa0d62b-e9d2-4163-beb7-74d499965b65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.024389 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1362cdda-d5a4-416f-8f65-6a631433d1ef-images\") pod \"machine-config-operator-74547568cd-lst8h\" (UID: \"1362cdda-d5a4-416f-8f65-6a631433d1ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.024411 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd-metrics-certs\") pod \"router-default-5444994796-hbts9\" (UID: \"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd\") " pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.024438 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08bfd5dc-3708-4cc8-acf8-8fa2a91aef54-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvk4l\" (UID: \"08bfd5dc-3708-4cc8-acf8-8fa2a91aef54\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.024467 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c466078-87ee-40ea-83ee-11aa309b065f-audit-dir\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.024510 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mzfr\" (UniqueName: \"kubernetes.io/projected/c5c9df79-c602-4d20-9b74-c96f479d0f03-kube-api-access-2mzfr\") pod \"route-controller-manager-6576b87f9c-wc4dc\" (UID: \"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.024538 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.024581 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a749f255-61ea-4370-87fc-2d13276383b0-config\") pod \"console-operator-58897d9998-w8qs9\" (UID: \"a749f255-61ea-4370-87fc-2d13276383b0\") " 
pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.023975 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a749f255-61ea-4370-87fc-2d13276383b0-trusted-ca\") pod \"console-operator-58897d9998-w8qs9\" (UID: \"a749f255-61ea-4370-87fc-2d13276383b0\") " pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.025207 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/101df3f7-db56-4a33-a0ec-d513f6785dde-serving-cert\") pod \"controller-manager-879f6c89f-ccbzg\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.025865 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5c9df79-c602-4d20-9b74-c96f479d0f03-config\") pod \"route-controller-manager-6576b87f9c-wc4dc\" (UID: \"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.027408 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6c466078-87ee-40ea-83ee-11aa309b065f-etcd-serving-ca\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.027431 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5c9df79-c602-4d20-9b74-c96f479d0f03-client-ca\") pod \"route-controller-manager-6576b87f9c-wc4dc\" (UID: 
\"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.027578 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dde2be07-f4f9-4868-801f-4a0b650a5b7f-audit-dir\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.027921 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ccbzg\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.028235 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d78c77cc-e724-4f09-b613-08983a1d2658-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c5c4w\" (UID: \"d78c77cc-e724-4f09-b613-08983a1d2658\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.028481 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.029366 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/6c466078-87ee-40ea-83ee-11aa309b065f-image-import-ca\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.029801 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-client-ca\") pod \"controller-manager-879f6c89f-ccbzg\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.030407 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c466078-87ee-40ea-83ee-11aa309b065f-audit-dir\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.030456 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c466078-87ee-40ea-83ee-11aa309b065f-node-pullsecrets\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.030968 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394d4964-a60d-4ad5-90e7-44a7e36eae71-config\") pod \"machine-approver-56656f9798-pc48d\" (UID: \"394d4964-a60d-4ad5-90e7-44a7e36eae71\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.031438 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.033916 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.034803 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6c466078-87ee-40ea-83ee-11aa309b065f-encryption-config\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.034957 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9a689c-66c4-46e8-ba11-951ab4eaf0d1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-w5xjp\" (UID: \"6e9a689c-66c4-46e8-ba11-951ab4eaf0d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035112 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2a6abc12-af78-4433-84ae-8421ade4d80c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-grt4k\" (UID: \"2a6abc12-af78-4433-84ae-8421ade4d80c\") " pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" Jan 
29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035161 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035187 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1362cdda-d5a4-416f-8f65-6a631433d1ef-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lst8h\" (UID: \"1362cdda-d5a4-416f-8f65-6a631433d1ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035244 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fc3dbe01-fdbc-4393-9a35-7f6244c4f385-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4c2k4\" (UID: \"fc3dbe01-fdbc-4393-9a35-7f6244c4f385\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4c2k4" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035276 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7244f40b-2b72-48e2-bd02-5fdc718a460b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4s8m8\" (UID: \"7244f40b-2b72-48e2-bd02-5fdc718a460b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4s8m8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035299 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6c466078-87ee-40ea-83ee-11aa309b065f-config\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035315 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp9bx\" (UniqueName: \"kubernetes.io/projected/48991717-d2c1-427b-8b52-dc549b2e87d9-kube-api-access-jp9bx\") pod \"machine-config-controller-84d6567774-z87xh\" (UID: \"48991717-d2c1-427b-8b52-dc549b2e87d9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035334 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbhxn\" (UniqueName: \"kubernetes.io/projected/e853a192-370e-4deb-9668-671fd221b7fc-kube-api-access-gbhxn\") pod \"service-ca-operator-777779d784-p5h6z\" (UID: \"e853a192-370e-4deb-9668-671fd221b7fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035344 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a749f255-61ea-4370-87fc-2d13276383b0-serving-cert\") pod \"console-operator-58897d9998-w8qs9\" (UID: \"a749f255-61ea-4370-87fc-2d13276383b0\") " pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035352 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-registration-dir\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 
crc kubenswrapper[4660]: I0129 12:08:22.035407 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035428 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c4cd802-1046-46c7-9168-0a281e7e92b2-metrics-tls\") pod \"dns-operator-744455d44c-v4bh4\" (UID: \"5c4cd802-1046-46c7-9168-0a281e7e92b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4bh4" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035449 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2457x\" (UniqueName: \"kubernetes.io/projected/6c466078-87ee-40ea-83ee-11aa309b065f-kube-api-access-2457x\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035465 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035502 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48991717-d2c1-427b-8b52-dc549b2e87d9-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-z87xh\" (UID: \"48991717-d2c1-427b-8b52-dc549b2e87d9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035521 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnzq8\" (UniqueName: \"kubernetes.io/projected/8d4807a1-3e06-4ec0-9e43-70d5c9755f61-kube-api-access-wnzq8\") pod \"olm-operator-6b444d44fb-7c4dl\" (UID: \"8d4807a1-3e06-4ec0-9e43-70d5c9755f61\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035538 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-plugins-dir\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035563 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-config\") pod \"controller-manager-879f6c89f-ccbzg\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035580 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1456961-1af8-402a-9ebd-9cb419b85701-etcd-service-ca\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035598 4660 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6c466078-87ee-40ea-83ee-11aa309b065f-audit\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035616 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c466078-87ee-40ea-83ee-11aa309b065f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035635 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkhrg\" (UniqueName: \"kubernetes.io/projected/4aa0d62b-e9d2-4163-beb7-74d499965b65-kube-api-access-nkhrg\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgbpf\" (UID: \"4aa0d62b-e9d2-4163-beb7-74d499965b65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035651 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd-stats-auth\") pod \"router-default-5444994796-hbts9\" (UID: \"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd\") " pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035666 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c466078-87ee-40ea-83ee-11aa309b065f-etcd-client\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 
12:08:22.036340 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.035684 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hkff\" (UniqueName: \"kubernetes.io/projected/a749f255-61ea-4370-87fc-2d13276383b0-kube-api-access-6hkff\") pod \"console-operator-58897d9998-w8qs9\" (UID: \"a749f255-61ea-4370-87fc-2d13276383b0\") " pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.036747 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8d4807a1-3e06-4ec0-9e43-70d5c9755f61-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7c4dl\" (UID: \"8d4807a1-3e06-4ec0-9e43-70d5c9755f61\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.036765 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d1456961-1af8-402a-9ebd-9cb419b85701-etcd-client\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.036785 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/394d4964-a60d-4ad5-90e7-44a7e36eae71-auth-proxy-config\") pod \"machine-approver-56656f9798-pc48d\" (UID: \"394d4964-a60d-4ad5-90e7-44a7e36eae71\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.036803 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crvg\" (UniqueName: \"kubernetes.io/projected/ba274694-159c-4f63-9aff-54ba10d6f5ed-kube-api-access-9crvg\") pod \"openshift-config-operator-7777fb866f-9z9h5\" (UID: \"ba274694-159c-4f63-9aff-54ba10d6f5ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.036830 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.036858 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.036876 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aa0d62b-e9d2-4163-beb7-74d499965b65-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgbpf\" (UID: \"4aa0d62b-e9d2-4163-beb7-74d499965b65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.036891 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e818a9e1-a6ae-4d43-aca2-d9cad02e57ba-config\") pod \"kube-apiserver-operator-766d6c64bb-s6zhh\" (UID: \"e818a9e1-a6ae-4d43-aca2-d9cad02e57ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.036907 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7ljd\" (UniqueName: \"kubernetes.io/projected/fc3dbe01-fdbc-4393-9a35-7f6244c4f385-kube-api-access-x7ljd\") pod \"multus-admission-controller-857f4d67dd-4c2k4\" (UID: \"fc3dbe01-fdbc-4393-9a35-7f6244c4f385\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4c2k4" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.037034 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-audit-policies\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.037068 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.037057 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08bfd5dc-3708-4cc8-acf8-8fa2a91aef54-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvk4l\" (UID: \"08bfd5dc-3708-4cc8-acf8-8fa2a91aef54\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.038247 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-config\") pod \"controller-manager-879f6c89f-ccbzg\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.038262 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8nbd\" (UniqueName: \"kubernetes.io/projected/d78c77cc-e724-4f09-b613-08983a1d2658-kube-api-access-n8nbd\") pod \"cluster-image-registry-operator-dc59b4c8b-c5c4w\" (UID: \"d78c77cc-e724-4f09-b613-08983a1d2658\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.038286 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1362cdda-d5a4-416f-8f65-6a631433d1ef-proxy-tls\") pod \"machine-config-operator-74547568cd-lst8h\" (UID: \"1362cdda-d5a4-416f-8f65-6a631433d1ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.038301 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d1456961-1af8-402a-9ebd-9cb419b85701-etcd-ca\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.038319 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/b3064f75-3f20-425e-94fc-9b2db0147c1d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-54d55\" (UID: \"b3064f75-3f20-425e-94fc-9b2db0147c1d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.038557 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/394d4964-a60d-4ad5-90e7-44a7e36eae71-machine-approver-tls\") pod \"machine-approver-56656f9798-pc48d\" (UID: \"394d4964-a60d-4ad5-90e7-44a7e36eae71\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.038723 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6c466078-87ee-40ea-83ee-11aa309b065f-audit\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.038806 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.038952 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c466078-87ee-40ea-83ee-11aa309b065f-serving-cert\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.039113 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c466078-87ee-40ea-83ee-11aa309b065f-config\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.039460 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.040116 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-audit-policies\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.040593 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5c9df79-c602-4d20-9b74-c96f479d0f03-serving-cert\") pod \"route-controller-manager-6576b87f9c-wc4dc\" (UID: \"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.041103 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.041229 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc 
kubenswrapper[4660]: I0129 12:08:22.041407 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.041495 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/394d4964-a60d-4ad5-90e7-44a7e36eae71-auth-proxy-config\") pod \"machine-approver-56656f9798-pc48d\" (UID: \"394d4964-a60d-4ad5-90e7-44a7e36eae71\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.042042 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.042858 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c4cd802-1046-46c7-9168-0a281e7e92b2-metrics-tls\") pod \"dns-operator-744455d44c-v4bh4\" (UID: \"5c4cd802-1046-46c7-9168-0a281e7e92b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4bh4" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.043881 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c466078-87ee-40ea-83ee-11aa309b065f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9zrm5\" (UID: 
\"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.043952 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tvjqj"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.044403 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d78c77cc-e724-4f09-b613-08983a1d2658-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c5c4w\" (UID: \"d78c77cc-e724-4f09-b613-08983a1d2658\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.046158 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.046170 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.046192 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c466078-87ee-40ea-83ee-11aa309b065f-etcd-client\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:22 crc 
kubenswrapper[4660]: I0129 12:08:22.047671 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8j7l7"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.050072 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.051503 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mcjsl"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.054466 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mfb8p"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.055681 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.058474 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.058762 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.062080 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.064195 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-knwfb"] Jan 29 12:08:22 crc 
kubenswrapper[4660]: I0129 12:08:22.066095 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.067784 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.068846 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-l6j2g"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.069461 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-l6j2g" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.069954 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4s8m8"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.071732 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.072754 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-l6j2g"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.074003 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wtvmf"] Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.078792 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.098379 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.119350 4660 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.138571 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.139785 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5mh7\" (UniqueName: \"kubernetes.io/projected/398de0cd-73e3-46c2-9ed1-4921e64fe50b-kube-api-access-d5mh7\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.139822 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a6abc12-af78-4433-84ae-8421ade4d80c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-grt4k\" (UID: \"2a6abc12-af78-4433-84ae-8421ade4d80c\") " pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.139853 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-socket-dir\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.139876 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5kq7\" (UniqueName: \"kubernetes.io/projected/170df6ed-3e79-4577-ba9f-20e4c075128c-kube-api-access-v5kq7\") pod \"package-server-manager-789f6589d5-qqwgw\" (UID: \"170df6ed-3e79-4577-ba9f-20e4c075128c\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.139900 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2dls\" (UniqueName: \"kubernetes.io/projected/4064e41f-ba1f-4b56-ac8b-2b50579d0953-kube-api-access-n2dls\") pod \"service-ca-9c57cc56f-mcjsl\" (UID: \"4064e41f-ba1f-4b56-ac8b-2b50579d0953\") " pod="openshift-service-ca/service-ca-9c57cc56f-mcjsl" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.139938 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-config\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.139965 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd-default-certificate\") pod \"router-default-5444994796-hbts9\" (UID: \"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd\") " pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.139993 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8slz\" (UniqueName: \"kubernetes.io/projected/08bfd5dc-3708-4cc8-acf8-8fa2a91aef54-kube-api-access-c8slz\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvk4l\" (UID: \"08bfd5dc-3708-4cc8-acf8-8fa2a91aef54\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140021 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e853a192-370e-4deb-9668-671fd221b7fc-serving-cert\") pod \"service-ca-operator-777779d784-p5h6z\" (UID: \"e853a192-370e-4deb-9668-671fd221b7fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140044 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3064f75-3f20-425e-94fc-9b2db0147c1d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-54d55\" (UID: \"b3064f75-3f20-425e-94fc-9b2db0147c1d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140084 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9vsf\" (UniqueName: \"kubernetes.io/projected/1362cdda-d5a4-416f-8f65-6a631433d1ef-kube-api-access-h9vsf\") pod \"machine-config-operator-74547568cd-lst8h\" (UID: \"1362cdda-d5a4-416f-8f65-6a631433d1ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140107 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/398de0cd-73e3-46c2-9ed1-4921e64fe50b-serving-cert\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140127 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba274694-159c-4f63-9aff-54ba10d6f5ed-serving-cert\") pod \"openshift-config-operator-7777fb866f-9z9h5\" (UID: \"ba274694-159c-4f63-9aff-54ba10d6f5ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" Jan 29 
12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140151 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-mountpoint-dir\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140157 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-socket-dir\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140175 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxhdb\" (UniqueName: \"kubernetes.io/projected/8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd-kube-api-access-lxhdb\") pod \"router-default-5444994796-hbts9\" (UID: \"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd\") " pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140237 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-mountpoint-dir\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140241 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4064e41f-ba1f-4b56-ac8b-2b50579d0953-signing-cabundle\") pod \"service-ca-9c57cc56f-mcjsl\" (UID: \"4064e41f-ba1f-4b56-ac8b-2b50579d0953\") " pod="openshift-service-ca/service-ca-9c57cc56f-mcjsl" Jan 29 12:08:22 crc 
kubenswrapper[4660]: I0129 12:08:22.140329 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2sb5\" (UniqueName: \"kubernetes.io/projected/7244f40b-2b72-48e2-bd02-5fdc718a460b-kube-api-access-n2sb5\") pod \"control-plane-machine-set-operator-78cbb6b69f-4s8m8\" (UID: \"7244f40b-2b72-48e2-bd02-5fdc718a460b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4s8m8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140384 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ba274694-159c-4f63-9aff-54ba10d6f5ed-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9z9h5\" (UID: \"ba274694-159c-4f63-9aff-54ba10d6f5ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140440 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7wcs\" (UniqueName: \"kubernetes.io/projected/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-kube-api-access-t7wcs\") pod \"collect-profiles-29494800-96nss\" (UID: \"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140464 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ab0e310-7e1b-404d-b763-b6813d39d49d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6jrtx\" (UID: \"4ab0e310-7e1b-404d-b763-b6813d39d49d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6jrtx" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140528 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gww6h\" (UniqueName: 
\"kubernetes.io/projected/486306e9-968b-4d73-932e-8f12efbf3204-kube-api-access-gww6h\") pod \"catalog-operator-68c6474976-dhkss\" (UID: \"486306e9-968b-4d73-932e-8f12efbf3204\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140552 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jgzj\" (UniqueName: \"kubernetes.io/projected/2a6abc12-af78-4433-84ae-8421ade4d80c-kube-api-access-6jgzj\") pod \"marketplace-operator-79b997595-grt4k\" (UID: \"2a6abc12-af78-4433-84ae-8421ade4d80c\") " pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140573 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-secret-volume\") pod \"collect-profiles-29494800-96nss\" (UID: \"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140631 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4064e41f-ba1f-4b56-ac8b-2b50579d0953-signing-key\") pod \"service-ca-9c57cc56f-mcjsl\" (UID: \"4064e41f-ba1f-4b56-ac8b-2b50579d0953\") " pod="openshift-service-ca/service-ca-9c57cc56f-mcjsl" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140666 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ba274694-159c-4f63-9aff-54ba10d6f5ed-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9z9h5\" (UID: \"ba274694-159c-4f63-9aff-54ba10d6f5ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140679 
4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e853a192-370e-4deb-9668-671fd221b7fc-config\") pod \"service-ca-operator-777779d784-p5h6z\" (UID: \"e853a192-370e-4deb-9668-671fd221b7fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140722 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e818a9e1-a6ae-4d43-aca2-d9cad02e57ba-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s6zhh\" (UID: \"e818a9e1-a6ae-4d43-aca2-d9cad02e57ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140772 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-csi-data-dir\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140798 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3064f75-3f20-425e-94fc-9b2db0147c1d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-54d55\" (UID: \"b3064f75-3f20-425e-94fc-9b2db0147c1d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140819 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9a689c-66c4-46e8-ba11-951ab4eaf0d1-config\") pod \"kube-controller-manager-operator-78b949d7b-w5xjp\" (UID: \"6e9a689c-66c4-46e8-ba11-951ab4eaf0d1\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140881 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv5zp\" (UniqueName: \"kubernetes.io/projected/5d92dd16-4a3a-42f4-9260-241ab774a2ea-kube-api-access-wv5zp\") pod \"migrator-59844c95c7-mfb8p\" (UID: \"5d92dd16-4a3a-42f4-9260-241ab774a2ea\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mfb8p" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140886 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-csi-data-dir\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140933 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-service-ca-bundle\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.140961 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxqbs\" (UniqueName: \"kubernetes.io/projected/4ab0e310-7e1b-404d-b763-b6813d39d49d-kube-api-access-mxqbs\") pod \"cluster-samples-operator-665b6dd947-6jrtx\" (UID: \"4ab0e310-7e1b-404d-b763-b6813d39d49d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6jrtx" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141023 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/6e9a689c-66c4-46e8-ba11-951ab4eaf0d1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-w5xjp\" (UID: \"6e9a689c-66c4-46e8-ba11-951ab4eaf0d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141070 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-config-volume\") pod \"collect-profiles-29494800-96nss\" (UID: \"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141147 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1456961-1af8-402a-9ebd-9cb419b85701-config\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141198 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/486306e9-968b-4d73-932e-8f12efbf3204-profile-collector-cert\") pod \"catalog-operator-68c6474976-dhkss\" (UID: \"486306e9-968b-4d73-932e-8f12efbf3204\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141224 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8d4807a1-3e06-4ec0-9e43-70d5c9755f61-srv-cert\") pod \"olm-operator-6b444d44fb-7c4dl\" (UID: \"8d4807a1-3e06-4ec0-9e43-70d5c9755f61\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 
12:08:22.141249 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa0d62b-e9d2-4163-beb7-74d499965b65-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgbpf\" (UID: \"4aa0d62b-e9d2-4163-beb7-74d499965b65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141272 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1362cdda-d5a4-416f-8f65-6a631433d1ef-images\") pod \"machine-config-operator-74547568cd-lst8h\" (UID: \"1362cdda-d5a4-416f-8f65-6a631433d1ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141294 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd-metrics-certs\") pod \"router-default-5444994796-hbts9\" (UID: \"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd\") " pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141319 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08bfd5dc-3708-4cc8-acf8-8fa2a91aef54-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvk4l\" (UID: \"08bfd5dc-3708-4cc8-acf8-8fa2a91aef54\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141350 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141371 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9a689c-66c4-46e8-ba11-951ab4eaf0d1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-w5xjp\" (UID: \"6e9a689c-66c4-46e8-ba11-951ab4eaf0d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141392 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2a6abc12-af78-4433-84ae-8421ade4d80c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-grt4k\" (UID: \"2a6abc12-af78-4433-84ae-8421ade4d80c\") " pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141416 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1362cdda-d5a4-416f-8f65-6a631433d1ef-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lst8h\" (UID: \"1362cdda-d5a4-416f-8f65-6a631433d1ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141436 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fc3dbe01-fdbc-4393-9a35-7f6244c4f385-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4c2k4\" (UID: \"fc3dbe01-fdbc-4393-9a35-7f6244c4f385\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4c2k4" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 
12:08:22.141449 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9a689c-66c4-46e8-ba11-951ab4eaf0d1-config\") pod \"kube-controller-manager-operator-78b949d7b-w5xjp\" (UID: \"6e9a689c-66c4-46e8-ba11-951ab4eaf0d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141472 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7244f40b-2b72-48e2-bd02-5fdc718a460b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4s8m8\" (UID: \"7244f40b-2b72-48e2-bd02-5fdc718a460b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4s8m8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141506 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp9bx\" (UniqueName: \"kubernetes.io/projected/48991717-d2c1-427b-8b52-dc549b2e87d9-kube-api-access-jp9bx\") pod \"machine-config-controller-84d6567774-z87xh\" (UID: \"48991717-d2c1-427b-8b52-dc549b2e87d9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141529 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbhxn\" (UniqueName: \"kubernetes.io/projected/e853a192-370e-4deb-9668-671fd221b7fc-kube-api-access-gbhxn\") pod \"service-ca-operator-777779d784-p5h6z\" (UID: \"e853a192-370e-4deb-9668-671fd221b7fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141562 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-registration-dir\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141596 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48991717-d2c1-427b-8b52-dc549b2e87d9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z87xh\" (UID: \"48991717-d2c1-427b-8b52-dc549b2e87d9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141622 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnzq8\" (UniqueName: \"kubernetes.io/projected/8d4807a1-3e06-4ec0-9e43-70d5c9755f61-kube-api-access-wnzq8\") pod \"olm-operator-6b444d44fb-7c4dl\" (UID: \"8d4807a1-3e06-4ec0-9e43-70d5c9755f61\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141642 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-plugins-dir\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141669 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d1456961-1af8-402a-9ebd-9cb419b85701-etcd-service-ca\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141708 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nkhrg\" (UniqueName: \"kubernetes.io/projected/4aa0d62b-e9d2-4163-beb7-74d499965b65-kube-api-access-nkhrg\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgbpf\" (UID: \"4aa0d62b-e9d2-4163-beb7-74d499965b65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141731 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd-stats-auth\") pod \"router-default-5444994796-hbts9\" (UID: \"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd\") " pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141759 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8d4807a1-3e06-4ec0-9e43-70d5c9755f61-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7c4dl\" (UID: \"8d4807a1-3e06-4ec0-9e43-70d5c9755f61\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141778 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d1456961-1af8-402a-9ebd-9cb419b85701-etcd-client\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141801 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9crvg\" (UniqueName: \"kubernetes.io/projected/ba274694-159c-4f63-9aff-54ba10d6f5ed-kube-api-access-9crvg\") pod \"openshift-config-operator-7777fb866f-9z9h5\" (UID: \"ba274694-159c-4f63-9aff-54ba10d6f5ed\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141835 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aa0d62b-e9d2-4163-beb7-74d499965b65-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgbpf\" (UID: \"4aa0d62b-e9d2-4163-beb7-74d499965b65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141858 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e818a9e1-a6ae-4d43-aca2-d9cad02e57ba-config\") pod \"kube-apiserver-operator-766d6c64bb-s6zhh\" (UID: \"e818a9e1-a6ae-4d43-aca2-d9cad02e57ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141864 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3064f75-3f20-425e-94fc-9b2db0147c1d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-54d55\" (UID: \"b3064f75-3f20-425e-94fc-9b2db0147c1d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141878 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7ljd\" (UniqueName: \"kubernetes.io/projected/fc3dbe01-fdbc-4393-9a35-7f6244c4f385-kube-api-access-x7ljd\") pod \"multus-admission-controller-857f4d67dd-4c2k4\" (UID: \"fc3dbe01-fdbc-4393-9a35-7f6244c4f385\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4c2k4" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141900 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/08bfd5dc-3708-4cc8-acf8-8fa2a91aef54-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvk4l\" (UID: \"08bfd5dc-3708-4cc8-acf8-8fa2a91aef54\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141928 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1362cdda-d5a4-416f-8f65-6a631433d1ef-proxy-tls\") pod \"machine-config-operator-74547568cd-lst8h\" (UID: \"1362cdda-d5a4-416f-8f65-6a631433d1ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141947 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d1456961-1af8-402a-9ebd-9cb419b85701-etcd-ca\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141966 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3064f75-3f20-425e-94fc-9b2db0147c1d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-54d55\" (UID: \"b3064f75-3f20-425e-94fc-9b2db0147c1d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141987 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/486306e9-968b-4d73-932e-8f12efbf3204-srv-cert\") pod \"catalog-operator-68c6474976-dhkss\" (UID: \"486306e9-968b-4d73-932e-8f12efbf3204\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" Jan 29 12:08:22 crc 
kubenswrapper[4660]: I0129 12:08:22.142011 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxfp6\" (UniqueName: \"kubernetes.io/projected/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-kube-api-access-hxfp6\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.142033 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd-service-ca-bundle\") pod \"router-default-5444994796-hbts9\" (UID: \"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd\") " pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.142077 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e818a9e1-a6ae-4d43-aca2-d9cad02e57ba-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s6zhh\" (UID: \"e818a9e1-a6ae-4d43-aca2-d9cad02e57ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.142106 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/170df6ed-3e79-4577-ba9f-20e4c075128c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qqwgw\" (UID: \"170df6ed-3e79-4577-ba9f-20e4c075128c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.142128 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1456961-1af8-402a-9ebd-9cb419b85701-serving-cert\") pod \"etcd-operator-b45778765-ld5r8\" (UID: 
\"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.142150 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/48991717-d2c1-427b-8b52-dc549b2e87d9-proxy-tls\") pod \"machine-config-controller-84d6567774-z87xh\" (UID: \"48991717-d2c1-427b-8b52-dc549b2e87d9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.142174 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59xm7\" (UniqueName: \"kubernetes.io/projected/d1456961-1af8-402a-9ebd-9cb419b85701-kube-api-access-59xm7\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.142208 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1362cdda-d5a4-416f-8f65-6a631433d1ef-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lst8h\" (UID: \"1362cdda-d5a4-416f-8f65-6a631433d1ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.141672 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1456961-1af8-402a-9ebd-9cb419b85701-config\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.142341 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d1456961-1af8-402a-9ebd-9cb419b85701-etcd-service-ca\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.142494 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-registration-dir\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.142825 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d1456961-1af8-402a-9ebd-9cb419b85701-etcd-ca\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.143113 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/48991717-d2c1-427b-8b52-dc549b2e87d9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z87xh\" (UID: \"48991717-d2c1-427b-8b52-dc549b2e87d9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.143207 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-plugins-dir\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.144021 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4aa0d62b-e9d2-4163-beb7-74d499965b65-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgbpf\" (UID: \"4aa0d62b-e9d2-4163-beb7-74d499965b65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.144059 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba274694-159c-4f63-9aff-54ba10d6f5ed-serving-cert\") pod \"openshift-config-operator-7777fb866f-9z9h5\" (UID: \"ba274694-159c-4f63-9aff-54ba10d6f5ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.144148 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa0d62b-e9d2-4163-beb7-74d499965b65-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgbpf\" (UID: \"4aa0d62b-e9d2-4163-beb7-74d499965b65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.145041 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9a689c-66c4-46e8-ba11-951ab4eaf0d1-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-w5xjp\" (UID: \"6e9a689c-66c4-46e8-ba11-951ab4eaf0d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.146463 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1456961-1af8-402a-9ebd-9cb419b85701-serving-cert\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 
29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.146564 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d1456961-1af8-402a-9ebd-9cb419b85701-etcd-client\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.146707 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4ab0e310-7e1b-404d-b763-b6813d39d49d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6jrtx\" (UID: \"4ab0e310-7e1b-404d-b763-b6813d39d49d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6jrtx" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.146856 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3064f75-3f20-425e-94fc-9b2db0147c1d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-54d55\" (UID: \"b3064f75-3f20-425e-94fc-9b2db0147c1d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.178609 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.198598 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.218903 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.226963 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e818a9e1-a6ae-4d43-aca2-d9cad02e57ba-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s6zhh\" (UID: \"e818a9e1-a6ae-4d43-aca2-d9cad02e57ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.238362 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.244230 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e818a9e1-a6ae-4d43-aca2-d9cad02e57ba-config\") pod \"kube-apiserver-operator-766d6c64bb-s6zhh\" (UID: \"e818a9e1-a6ae-4d43-aca2-d9cad02e57ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.259231 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.278664 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.315045 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.318447 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.339315 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.359305 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 
12:08:22.379682 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.399486 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.419036 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.426486 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08bfd5dc-3708-4cc8-acf8-8fa2a91aef54-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvk4l\" (UID: \"08bfd5dc-3708-4cc8-acf8-8fa2a91aef54\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.439176 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.459115 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.463411 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08bfd5dc-3708-4cc8-acf8-8fa2a91aef54-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvk4l\" (UID: \"08bfd5dc-3708-4cc8-acf8-8fa2a91aef54\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.478746 4660 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.498907 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.519125 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.522526 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1362cdda-d5a4-416f-8f65-6a631433d1ef-images\") pod \"machine-config-operator-74547568cd-lst8h\" (UID: \"1362cdda-d5a4-416f-8f65-6a631433d1ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.542000 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.546624 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/48991717-d2c1-427b-8b52-dc549b2e87d9-proxy-tls\") pod \"machine-config-controller-84d6567774-z87xh\" (UID: \"48991717-d2c1-427b-8b52-dc549b2e87d9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.559394 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.579330 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.585948 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1362cdda-d5a4-416f-8f65-6a631433d1ef-proxy-tls\") pod \"machine-config-operator-74547568cd-lst8h\" (UID: \"1362cdda-d5a4-416f-8f65-6a631433d1ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.599076 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.618972 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.626503 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd-metrics-certs\") pod \"router-default-5444994796-hbts9\" (UID: \"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd\") " pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.639188 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.658961 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.664087 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd-default-certificate\") pod \"router-default-5444994796-hbts9\" (UID: \"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd\") " pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.680082 4660 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-stats-default" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.686718 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd-stats-auth\") pod \"router-default-5444994796-hbts9\" (UID: \"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd\") " pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.698860 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.705842 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd-service-ca-bundle\") pod \"router-default-5444994796-hbts9\" (UID: \"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd\") " pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.719601 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.739138 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.759405 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.778514 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.781785 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4064e41f-ba1f-4b56-ac8b-2b50579d0953-signing-cabundle\") pod 
\"service-ca-9c57cc56f-mcjsl\" (UID: \"4064e41f-ba1f-4b56-ac8b-2b50579d0953\") " pod="openshift-service-ca/service-ca-9c57cc56f-mcjsl" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.799884 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.818633 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.824620 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-secret-volume\") pod \"collect-profiles-29494800-96nss\" (UID: \"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.824671 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/486306e9-968b-4d73-932e-8f12efbf3204-profile-collector-cert\") pod \"catalog-operator-68c6474976-dhkss\" (UID: \"486306e9-968b-4d73-932e-8f12efbf3204\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.825908 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8d4807a1-3e06-4ec0-9e43-70d5c9755f61-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7c4dl\" (UID: \"8d4807a1-3e06-4ec0-9e43-70d5c9755f61\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.839905 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 12:08:22 
crc kubenswrapper[4660]: I0129 12:08:22.842105 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-config-volume\") pod \"collect-profiles-29494800-96nss\" (UID: \"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.859397 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.878061 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.897611 4660 request.go:700] Waited for 1.004430348s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0 Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.900061 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.907050 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2a6abc12-af78-4433-84ae-8421ade4d80c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-grt4k\" (UID: \"2a6abc12-af78-4433-84ae-8421ade4d80c\") " pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.918295 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.949510 4660 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.951932 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a6abc12-af78-4433-84ae-8421ade4d80c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-grt4k\" (UID: \"2a6abc12-af78-4433-84ae-8421ade4d80c\") " pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.959231 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.979074 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.984233 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4064e41f-ba1f-4b56-ac8b-2b50579d0953-signing-key\") pod \"service-ca-9c57cc56f-mcjsl\" (UID: \"4064e41f-ba1f-4b56-ac8b-2b50579d0953\") " pod="openshift-service-ca/service-ca-9c57cc56f-mcjsl" Jan 29 12:08:22 crc kubenswrapper[4660]: I0129 12:08:22.998968 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.019014 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.038706 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.059314 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 
29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.062920 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e853a192-370e-4deb-9668-671fd221b7fc-serving-cert\") pod \"service-ca-operator-777779d784-p5h6z\" (UID: \"e853a192-370e-4deb-9668-671fd221b7fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.078173 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.099114 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.101796 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e853a192-370e-4deb-9668-671fd221b7fc-config\") pod \"service-ca-operator-777779d784-p5h6z\" (UID: \"e853a192-370e-4deb-9668-671fd221b7fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.119213 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.138783 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.140418 4660 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.140489 4660 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-config podName:398de0cd-73e3-46c2-9ed1-4921e64fe50b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:23.640465058 +0000 UTC m=+140.863407190 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-config") pod "authentication-operator-69f744f599-knwfb" (UID: "398de0cd-73e3-46c2-9ed1-4921e64fe50b") : failed to sync configmap cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.140586 4660 secret.go:188] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.140772 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/398de0cd-73e3-46c2-9ed1-4921e64fe50b-serving-cert podName:398de0cd-73e3-46c2-9ed1-4921e64fe50b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:23.640747246 +0000 UTC m=+140.863689458 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/398de0cd-73e3-46c2-9ed1-4921e64fe50b-serving-cert") pod "authentication-operator-69f744f599-knwfb" (UID: "398de0cd-73e3-46c2-9ed1-4921e64fe50b") : failed to sync secret cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.141543 4660 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.141603 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-service-ca-bundle podName:398de0cd-73e3-46c2-9ed1-4921e64fe50b nodeName:}" failed. 
No retries permitted until 2026-01-29 12:08:23.641590791 +0000 UTC m=+140.864532973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-service-ca-bundle") pod "authentication-operator-69f744f599-knwfb" (UID: "398de0cd-73e3-46c2-9ed1-4921e64fe50b") : failed to sync configmap cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.141629 4660 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.141645 4660 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.141656 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc3dbe01-fdbc-4393-9a35-7f6244c4f385-webhook-certs podName:fc3dbe01-fdbc-4393-9a35-7f6244c4f385 nodeName:}" failed. No retries permitted until 2026-01-29 12:08:23.641648502 +0000 UTC m=+140.864590634 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fc3dbe01-fdbc-4393-9a35-7f6244c4f385-webhook-certs") pod "multus-admission-controller-857f4d67dd-4c2k4" (UID: "fc3dbe01-fdbc-4393-9a35-7f6244c4f385") : failed to sync secret cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.141735 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d4807a1-3e06-4ec0-9e43-70d5c9755f61-srv-cert podName:8d4807a1-3e06-4ec0-9e43-70d5c9755f61 nodeName:}" failed. No retries permitted until 2026-01-29 12:08:23.641722554 +0000 UTC m=+140.864664716 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8d4807a1-3e06-4ec0-9e43-70d5c9755f61-srv-cert") pod "olm-operator-6b444d44fb-7c4dl" (UID: "8d4807a1-3e06-4ec0-9e43-70d5c9755f61") : failed to sync secret cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.141901 4660 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.142029 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-trusted-ca-bundle podName:398de0cd-73e3-46c2-9ed1-4921e64fe50b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:23.642015833 +0000 UTC m=+140.864958015 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-trusted-ca-bundle") pod "authentication-operator-69f744f599-knwfb" (UID: "398de0cd-73e3-46c2-9ed1-4921e64fe50b") : failed to sync configmap cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.142676 4660 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.142742 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7244f40b-2b72-48e2-bd02-5fdc718a460b-control-plane-machine-set-operator-tls podName:7244f40b-2b72-48e2-bd02-5fdc718a460b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:23.642733064 +0000 UTC m=+140.865675196 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/7244f40b-2b72-48e2-bd02-5fdc718a460b-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-4s8m8" (UID: "7244f40b-2b72-48e2-bd02-5fdc718a460b") : failed to sync secret cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.143939 4660 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.144097 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/486306e9-968b-4d73-932e-8f12efbf3204-srv-cert podName:486306e9-968b-4d73-932e-8f12efbf3204 nodeName:}" failed. No retries permitted until 2026-01-29 12:08:23.644082964 +0000 UTC m=+140.867025166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/486306e9-968b-4d73-932e-8f12efbf3204-srv-cert") pod "catalog-operator-68c6474976-dhkss" (UID: "486306e9-968b-4d73-932e-8f12efbf3204") : failed to sync secret cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.143971 4660 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: E0129 12:08:23.144306 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/170df6ed-3e79-4577-ba9f-20e4c075128c-package-server-manager-serving-cert podName:170df6ed-3e79-4577-ba9f-20e4c075128c nodeName:}" failed. No retries permitted until 2026-01-29 12:08:23.64429465 +0000 UTC m=+140.867236872 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/170df6ed-3e79-4577-ba9f-20e4c075128c-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-qqwgw" (UID: "170df6ed-3e79-4577-ba9f-20e4c075128c") : failed to sync secret cache: timed out waiting for the condition Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.157817 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.178771 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.199798 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.218670 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.239526 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.259008 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.279079 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.299141 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.319337 4660 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.338794 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.359205 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.379179 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.399465 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.418963 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.438754 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.458453 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.487283 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.498426 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.518680 4660 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.538736 4660 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.559095 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.593745 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vvzh\" (UniqueName: \"kubernetes.io/projected/1afa8f6d-9033-41f9-b30c-4ce3b4b56399-kube-api-access-2vvzh\") pod \"machine-api-operator-5694c8668f-7dvlp\" (UID: \"1afa8f6d-9033-41f9-b30c-4ce3b4b56399\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.611677 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m92j9\" (UniqueName: \"kubernetes.io/projected/9992b62a-1c70-4213-b515-009b40aa326e-kube-api-access-m92j9\") pod \"apiserver-7bbb656c7d-bl69x\" (UID: \"9992b62a-1c70-4213-b515-009b40aa326e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.633245 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-977jl\" (UniqueName: \"kubernetes.io/projected/0cafd071-62fd-4601-a1ec-6ff85b52d0f5-kube-api-access-977jl\") pod \"openshift-apiserver-operator-796bbdcf4f-44dmm\" (UID: \"0cafd071-62fd-4601-a1ec-6ff85b52d0f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.659342 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 
12:08:23.665231 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-service-ca-bundle\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.665311 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8d4807a1-3e06-4ec0-9e43-70d5c9755f61-srv-cert\") pod \"olm-operator-6b444d44fb-7c4dl\" (UID: \"8d4807a1-3e06-4ec0-9e43-70d5c9755f61\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.665352 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.665376 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fc3dbe01-fdbc-4393-9a35-7f6244c4f385-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4c2k4\" (UID: \"fc3dbe01-fdbc-4393-9a35-7f6244c4f385\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4c2k4" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.665402 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7244f40b-2b72-48e2-bd02-5fdc718a460b-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-4s8m8\" (UID: \"7244f40b-2b72-48e2-bd02-5fdc718a460b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4s8m8" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.665529 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/486306e9-968b-4d73-932e-8f12efbf3204-srv-cert\") pod \"catalog-operator-68c6474976-dhkss\" (UID: \"486306e9-968b-4d73-932e-8f12efbf3204\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.665576 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/170df6ed-3e79-4577-ba9f-20e4c075128c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qqwgw\" (UID: \"170df6ed-3e79-4577-ba9f-20e4c075128c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.665649 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-config\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.665718 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/398de0cd-73e3-46c2-9ed1-4921e64fe50b-serving-cert\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.666438 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-service-ca-bundle\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.666723 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-config\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.667241 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/398de0cd-73e3-46c2-9ed1-4921e64fe50b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.669110 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/398de0cd-73e3-46c2-9ed1-4921e64fe50b-serving-cert\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.669211 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8d4807a1-3e06-4ec0-9e43-70d5c9755f61-srv-cert\") pod \"olm-operator-6b444d44fb-7c4dl\" (UID: \"8d4807a1-3e06-4ec0-9e43-70d5c9755f61\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.669270 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fc3dbe01-fdbc-4393-9a35-7f6244c4f385-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4c2k4\" (UID: \"fc3dbe01-fdbc-4393-9a35-7f6244c4f385\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4c2k4" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.669624 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/486306e9-968b-4d73-932e-8f12efbf3204-srv-cert\") pod \"catalog-operator-68c6474976-dhkss\" (UID: \"486306e9-968b-4d73-932e-8f12efbf3204\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.669641 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7244f40b-2b72-48e2-bd02-5fdc718a460b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4s8m8\" (UID: \"7244f40b-2b72-48e2-bd02-5fdc718a460b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4s8m8" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.670493 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/170df6ed-3e79-4577-ba9f-20e4c075128c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qqwgw\" (UID: \"170df6ed-3e79-4577-ba9f-20e4c075128c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.679029 4660 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.699124 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.719311 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.739582 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.759323 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.797515 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ng6l\" (UniqueName: \"kubernetes.io/projected/5c4cd802-1046-46c7-9168-0a281e7e92b2-kube-api-access-2ng6l\") pod \"dns-operator-744455d44c-v4bh4\" (UID: \"5c4cd802-1046-46c7-9168-0a281e7e92b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4bh4" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.817687 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbhmz\" (UniqueName: \"kubernetes.io/projected/5d163d7f-d49a-4487-9a62-a094182ac910-kube-api-access-kbhmz\") pod \"downloads-7954f5f757-grgf5\" (UID: \"5d163d7f-d49a-4487-9a62-a094182ac910\") " pod="openshift-console/downloads-7954f5f757-grgf5" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.832195 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd2bk\" (UniqueName: \"kubernetes.io/projected/dde2be07-f4f9-4868-801f-4a0b650a5b7f-kube-api-access-rd2bk\") pod \"oauth-openshift-558db77b4-h994k\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.834990 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.840743 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.854190 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k66dr\" (UniqueName: \"kubernetes.io/projected/101df3f7-db56-4a33-a0ec-d513f6785dde-kube-api-access-k66dr\") pod \"controller-manager-879f6c89f-ccbzg\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.854436 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.869568 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.876523 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d78c77cc-e724-4f09-b613-08983a1d2658-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c5c4w\" (UID: \"d78c77cc-e724-4f09-b613-08983a1d2658\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.894120 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.899269 4660 request.go:700] Waited for 1.868124939s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.899785 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhplm\" (UniqueName: \"kubernetes.io/projected/394d4964-a60d-4ad5-90e7-44a7e36eae71-kube-api-access-rhplm\") pod \"machine-approver-56656f9798-pc48d\" (UID: \"394d4964-a60d-4ad5-90e7-44a7e36eae71\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.922859 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mzfr\" (UniqueName: \"kubernetes.io/projected/c5c9df79-c602-4d20-9b74-c96f479d0f03-kube-api-access-2mzfr\") pod \"route-controller-manager-6576b87f9c-wc4dc\" (UID: \"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.955596 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2457x\" (UniqueName: \"kubernetes.io/projected/6c466078-87ee-40ea-83ee-11aa309b065f-kube-api-access-2457x\") pod \"apiserver-76f77b778f-9zrm5\" (UID: \"6c466078-87ee-40ea-83ee-11aa309b065f\") " pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.972339 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.973723 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8nbd\" (UniqueName: \"kubernetes.io/projected/d78c77cc-e724-4f09-b613-08983a1d2658-kube-api-access-n8nbd\") pod \"cluster-image-registry-operator-dc59b4c8b-c5c4w\" (UID: \"d78c77cc-e724-4f09-b613-08983a1d2658\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.985331 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.994357 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hkff\" (UniqueName: \"kubernetes.io/projected/a749f255-61ea-4370-87fc-2d13276383b0-kube-api-access-6hkff\") pod \"console-operator-58897d9998-w8qs9\" (UID: \"a749f255-61ea-4370-87fc-2d13276383b0\") " pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:23 crc kubenswrapper[4660]: I0129 12:08:23.994599 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.002733 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.007944 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-grgf5" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.022752 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-v4bh4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.031399 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.031813 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.040428 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.088383 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5mh7\" (UniqueName: \"kubernetes.io/projected/398de0cd-73e3-46c2-9ed1-4921e64fe50b-kube-api-access-d5mh7\") pod \"authentication-operator-69f744f599-knwfb\" (UID: \"398de0cd-73e3-46c2-9ed1-4921e64fe50b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.094660 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5kq7\" (UniqueName: \"kubernetes.io/projected/170df6ed-3e79-4577-ba9f-20e4c075128c-kube-api-access-v5kq7\") pod \"package-server-manager-789f6589d5-qqwgw\" (UID: \"170df6ed-3e79-4577-ba9f-20e4c075128c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.113703 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2dls\" (UniqueName: \"kubernetes.io/projected/4064e41f-ba1f-4b56-ac8b-2b50579d0953-kube-api-access-n2dls\") pod \"service-ca-9c57cc56f-mcjsl\" (UID: \"4064e41f-ba1f-4b56-ac8b-2b50579d0953\") " pod="openshift-service-ca/service-ca-9c57cc56f-mcjsl" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 
12:08:24.135267 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8slz\" (UniqueName: \"kubernetes.io/projected/08bfd5dc-3708-4cc8-acf8-8fa2a91aef54-kube-api-access-c8slz\") pod \"kube-storage-version-migrator-operator-b67b599dd-xvk4l\" (UID: \"08bfd5dc-3708-4cc8-acf8-8fa2a91aef54\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.170252 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9vsf\" (UniqueName: \"kubernetes.io/projected/1362cdda-d5a4-416f-8f65-6a631433d1ef-kube-api-access-h9vsf\") pod \"machine-config-operator-74547568cd-lst8h\" (UID: \"1362cdda-d5a4-416f-8f65-6a631433d1ef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.174430 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.177478 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.188907 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxhdb\" (UniqueName: \"kubernetes.io/projected/8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd-kube-api-access-lxhdb\") pod \"router-default-5444994796-hbts9\" (UID: \"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd\") " pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.191010 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.194033 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2sb5\" (UniqueName: \"kubernetes.io/projected/7244f40b-2b72-48e2-bd02-5fdc718a460b-kube-api-access-n2sb5\") pod \"control-plane-machine-set-operator-78cbb6b69f-4s8m8\" (UID: \"7244f40b-2b72-48e2-bd02-5fdc718a460b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4s8m8" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.199231 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.220083 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.222294 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7wcs\" (UniqueName: \"kubernetes.io/projected/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-kube-api-access-t7wcs\") pod \"collect-profiles-29494800-96nss\" (UID: \"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.223109 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mcjsl" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.237865 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gww6h\" (UniqueName: \"kubernetes.io/projected/486306e9-968b-4d73-932e-8f12efbf3204-kube-api-access-gww6h\") pod \"catalog-operator-68c6474976-dhkss\" (UID: \"486306e9-968b-4d73-932e-8f12efbf3204\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.253000 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4s8m8" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.255302 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jgzj\" (UniqueName: \"kubernetes.io/projected/2a6abc12-af78-4433-84ae-8421ade4d80c-kube-api-access-6jgzj\") pod \"marketplace-operator-79b997595-grt4k\" (UID: \"2a6abc12-af78-4433-84ae-8421ade4d80c\") " pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.269827 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.281780 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e818a9e1-a6ae-4d43-aca2-d9cad02e57ba-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s6zhh\" (UID: \"e818a9e1-a6ae-4d43-aca2-d9cad02e57ba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.282052 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.300941 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3064f75-3f20-425e-94fc-9b2db0147c1d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-54d55\" (UID: \"b3064f75-3f20-425e-94fc-9b2db0147c1d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.307036 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.317225 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv5zp\" (UniqueName: \"kubernetes.io/projected/5d92dd16-4a3a-42f4-9260-241ab774a2ea-kube-api-access-wv5zp\") pod \"migrator-59844c95c7-mfb8p\" (UID: \"5d92dd16-4a3a-42f4-9260-241ab774a2ea\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mfb8p" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.337572 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxqbs\" (UniqueName: \"kubernetes.io/projected/4ab0e310-7e1b-404d-b763-b6813d39d49d-kube-api-access-mxqbs\") pod \"cluster-samples-operator-665b6dd947-6jrtx\" (UID: \"4ab0e310-7e1b-404d-b763-b6813d39d49d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6jrtx" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.342158 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" event={"ID":"394d4964-a60d-4ad5-90e7-44a7e36eae71","Type":"ContainerStarted","Data":"c835077e30e395f5280b0900a8206b00eb52383c19c87a370d7e042619ebeed2"} Jan 29 12:08:24 
crc kubenswrapper[4660]: I0129 12:08:24.355638 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6e9a689c-66c4-46e8-ba11-951ab4eaf0d1-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-w5xjp\" (UID: \"6e9a689c-66c4-46e8-ba11-951ab4eaf0d1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.370347 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6jrtx" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.381983 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59xm7\" (UniqueName: \"kubernetes.io/projected/d1456961-1af8-402a-9ebd-9cb419b85701-kube-api-access-59xm7\") pod \"etcd-operator-b45778765-ld5r8\" (UID: \"d1456961-1af8-402a-9ebd-9cb419b85701\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.404731 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.413027 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.414020 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7dvlp"] Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.414912 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkhrg\" (UniqueName: \"kubernetes.io/projected/4aa0d62b-e9d2-4163-beb7-74d499965b65-kube-api-access-nkhrg\") pod \"openshift-controller-manager-operator-756b6f6bc6-zgbpf\" (UID: \"4aa0d62b-e9d2-4163-beb7-74d499965b65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.432262 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.437158 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp9bx\" (UniqueName: \"kubernetes.io/projected/48991717-d2c1-427b-8b52-dc549b2e87d9-kube-api-access-jp9bx\") pod \"machine-config-controller-84d6567774-z87xh\" (UID: \"48991717-d2c1-427b-8b52-dc549b2e87d9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.438509 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-grgf5"] Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.450533 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.450926 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbhxn\" (UniqueName: \"kubernetes.io/projected/e853a192-370e-4deb-9668-671fd221b7fc-kube-api-access-gbhxn\") pod \"service-ca-operator-777779d784-p5h6z\" (UID: \"e853a192-370e-4deb-9668-671fd221b7fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.454918 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm"] Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.467883 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.469665 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ccbzg"] Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.482470 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x"] Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.483176 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnzq8\" (UniqueName: \"kubernetes.io/projected/8d4807a1-3e06-4ec0-9e43-70d5c9755f61-kube-api-access-wnzq8\") pod \"olm-operator-6b444d44fb-7c4dl\" (UID: \"8d4807a1-3e06-4ec0-9e43-70d5c9755f61\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.483369 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.483519 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h994k"] Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.487879 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc"] Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.495264 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxfp6\" (UniqueName: \"kubernetes.io/projected/a5d2d1e9-c405-4de7-8a8f-18180c41f64f-kube-api-access-hxfp6\") pod \"csi-hostpathplugin-8j7l7\" (UID: \"a5d2d1e9-c405-4de7-8a8f-18180c41f64f\") " pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.495746 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-w8qs9"] Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.497916 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9crvg\" (UniqueName: \"kubernetes.io/projected/ba274694-159c-4f63-9aff-54ba10d6f5ed-kube-api-access-9crvg\") pod \"openshift-config-operator-7777fb866f-9z9h5\" (UID: \"ba274694-159c-4f63-9aff-54ba10d6f5ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.508976 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.515062 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.523294 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7ljd\" (UniqueName: \"kubernetes.io/projected/fc3dbe01-fdbc-4393-9a35-7f6244c4f385-kube-api-access-x7ljd\") pod \"multus-admission-controller-857f4d67dd-4c2k4\" (UID: \"fc3dbe01-fdbc-4393-9a35-7f6244c4f385\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4c2k4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.524146 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-v4bh4"] Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.536029 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.560342 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4c2k4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582596 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw67j\" (UniqueName: \"kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-kube-api-access-rw67j\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582634 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ef47e61f-9c90-4ccc-af09-58fcdb99b371-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582650 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a31e8874-9bb7-486f-969a-ad85fd41895c-apiservice-cert\") pod \"packageserver-d55dfcdfc-wz7n4\" (UID: \"a31e8874-9bb7-486f-969a-ad85fd41895c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582666 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkqlk\" (UniqueName: \"kubernetes.io/projected/a31e8874-9bb7-486f-969a-ad85fd41895c-kube-api-access-pkqlk\") pod \"packageserver-d55dfcdfc-wz7n4\" (UID: \"a31e8874-9bb7-486f-969a-ad85fd41895c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582699 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-serving-cert\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582717 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl4qm\" (UniqueName: \"kubernetes.io/projected/7c745995-ccb1-4229-b176-0e1af03d6c6d-kube-api-access-kl4qm\") pod \"ingress-operator-5b745b69d9-65ffv\" (UID: \"7c745995-ccb1-4229-b176-0e1af03d6c6d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582790 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-registry-tls\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582855 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-config\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582877 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-bound-sa-token\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582893 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a31e8874-9bb7-486f-969a-ad85fd41895c-tmpfs\") pod \"packageserver-d55dfcdfc-wz7n4\" (UID: \"a31e8874-9bb7-486f-969a-ad85fd41895c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582908 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c745995-ccb1-4229-b176-0e1af03d6c6d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-65ffv\" (UID: \"7c745995-ccb1-4229-b176-0e1af03d6c6d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582924 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a31e8874-9bb7-486f-969a-ad85fd41895c-webhook-cert\") pod \"packageserver-d55dfcdfc-wz7n4\" (UID: \"a31e8874-9bb7-486f-969a-ad85fd41895c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582939 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg8f8\" (UniqueName: \"kubernetes.io/projected/b177214f-7d4c-4f4f-8741-3a2695d1c495-kube-api-access-zg8f8\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582953 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-trusted-ca-bundle\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582970 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef47e61f-9c90-4ccc-af09-58fcdb99b371-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.582985 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7c745995-ccb1-4229-b176-0e1af03d6c6d-metrics-tls\") pod \"ingress-operator-5b745b69d9-65ffv\" (UID: \"7c745995-ccb1-4229-b176-0e1af03d6c6d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.583000 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-oauth-serving-cert\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.583015 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-oauth-config\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.583036 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef47e61f-9c90-4ccc-af09-58fcdb99b371-trusted-ca\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.583068 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: E0129 12:08:24.583444 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:25.083429253 +0000 UTC m=+142.306371385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.584647 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c745995-ccb1-4229-b176-0e1af03d6c6d-trusted-ca\") pod \"ingress-operator-5b745b69d9-65ffv\" (UID: \"7c745995-ccb1-4229-b176-0e1af03d6c6d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.584701 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-service-ca\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.589646 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef47e61f-9c90-4ccc-af09-58fcdb99b371-registry-certificates\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.589781 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mfb8p" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.597494 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.628949 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.690481 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.690744 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a31e8874-9bb7-486f-969a-ad85fd41895c-webhook-cert\") pod \"packageserver-d55dfcdfc-wz7n4\" (UID: \"a31e8874-9bb7-486f-969a-ad85fd41895c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.690767 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2fe5735-2f78-4963-8f44-8ed9ebc8549d-config-volume\") pod \"dns-default-wtvmf\" (UID: \"e2fe5735-2f78-4963-8f44-8ed9ebc8549d\") " pod="openshift-dns/dns-default-wtvmf" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.690802 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg8f8\" (UniqueName: \"kubernetes.io/projected/b177214f-7d4c-4f4f-8741-3a2695d1c495-kube-api-access-zg8f8\") pod \"console-f9d7485db-tvjqj\" 
(UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.690823 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn45l\" (UniqueName: \"kubernetes.io/projected/3703c6e8-170d-4aa1-978a-955d564ab30a-kube-api-access-fn45l\") pod \"machine-config-server-kjbsr\" (UID: \"3703c6e8-170d-4aa1-978a-955d564ab30a\") " pod="openshift-machine-config-operator/machine-config-server-kjbsr" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.690847 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-trusted-ca-bundle\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.690895 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef47e61f-9c90-4ccc-af09-58fcdb99b371-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.690930 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7c745995-ccb1-4229-b176-0e1af03d6c6d-metrics-tls\") pod \"ingress-operator-5b745b69d9-65ffv\" (UID: \"7c745995-ccb1-4229-b176-0e1af03d6c6d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.690956 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-oauth-serving-cert\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.691002 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-oauth-config\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.691115 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef47e61f-9c90-4ccc-af09-58fcdb99b371-trusted-ca\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.691194 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg4qm\" (UniqueName: \"kubernetes.io/projected/e2fe5735-2f78-4963-8f44-8ed9ebc8549d-kube-api-access-wg4qm\") pod \"dns-default-wtvmf\" (UID: \"e2fe5735-2f78-4963-8f44-8ed9ebc8549d\") " pod="openshift-dns/dns-default-wtvmf" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.691209 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e2fe5735-2f78-4963-8f44-8ed9ebc8549d-metrics-tls\") pod \"dns-default-wtvmf\" (UID: \"e2fe5735-2f78-4963-8f44-8ed9ebc8549d\") " pod="openshift-dns/dns-default-wtvmf" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.691257 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/7c745995-ccb1-4229-b176-0e1af03d6c6d-trusted-ca\") pod \"ingress-operator-5b745b69d9-65ffv\" (UID: \"7c745995-ccb1-4229-b176-0e1af03d6c6d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.693085 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-service-ca\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: E0129 12:08:24.693289 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:25.193264646 +0000 UTC m=+142.416206808 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.693470 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef47e61f-9c90-4ccc-af09-58fcdb99b371-registry-certificates\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.693541 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw67j\" (UniqueName: \"kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-kube-api-access-rw67j\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.693597 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a31e8874-9bb7-486f-969a-ad85fd41895c-apiservice-cert\") pod \"packageserver-d55dfcdfc-wz7n4\" (UID: \"a31e8874-9bb7-486f-969a-ad85fd41895c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.693618 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ef47e61f-9c90-4ccc-af09-58fcdb99b371-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.693635 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkqlk\" (UniqueName: \"kubernetes.io/projected/a31e8874-9bb7-486f-969a-ad85fd41895c-kube-api-access-pkqlk\") pod \"packageserver-d55dfcdfc-wz7n4\" (UID: \"a31e8874-9bb7-486f-969a-ad85fd41895c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.693685 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0cadab0-0302-4b01-8cc8-9e0bef8ceb7c-cert\") pod \"ingress-canary-l6j2g\" (UID: \"d0cadab0-0302-4b01-8cc8-9e0bef8ceb7c\") " pod="openshift-ingress-canary/ingress-canary-l6j2g" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.693783 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-serving-cert\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.693820 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3703c6e8-170d-4aa1-978a-955d564ab30a-node-bootstrap-token\") pod \"machine-config-server-kjbsr\" (UID: \"3703c6e8-170d-4aa1-978a-955d564ab30a\") " pod="openshift-machine-config-operator/machine-config-server-kjbsr" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.693870 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl4qm\" (UniqueName: 
\"kubernetes.io/projected/7c745995-ccb1-4229-b176-0e1af03d6c6d-kube-api-access-kl4qm\") pod \"ingress-operator-5b745b69d9-65ffv\" (UID: \"7c745995-ccb1-4229-b176-0e1af03d6c6d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.693899 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-registry-tls\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.693913 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3703c6e8-170d-4aa1-978a-955d564ab30a-certs\") pod \"machine-config-server-kjbsr\" (UID: \"3703c6e8-170d-4aa1-978a-955d564ab30a\") " pod="openshift-machine-config-operator/machine-config-server-kjbsr" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.694055 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-config\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.694098 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-bound-sa-token\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.694116 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a31e8874-9bb7-486f-969a-ad85fd41895c-tmpfs\") pod \"packageserver-d55dfcdfc-wz7n4\" (UID: \"a31e8874-9bb7-486f-969a-ad85fd41895c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.694137 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-666vd\" (UniqueName: \"kubernetes.io/projected/d0cadab0-0302-4b01-8cc8-9e0bef8ceb7c-kube-api-access-666vd\") pod \"ingress-canary-l6j2g\" (UID: \"d0cadab0-0302-4b01-8cc8-9e0bef8ceb7c\") " pod="openshift-ingress-canary/ingress-canary-l6j2g" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.694159 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c745995-ccb1-4229-b176-0e1af03d6c6d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-65ffv\" (UID: \"7c745995-ccb1-4229-b176-0e1af03d6c6d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.696119 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-trusted-ca-bundle\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.696457 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-service-ca\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.696857 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef47e61f-9c90-4ccc-af09-58fcdb99b371-trusted-ca\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.697035 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c745995-ccb1-4229-b176-0e1af03d6c6d-trusted-ca\") pod \"ingress-operator-5b745b69d9-65ffv\" (UID: \"7c745995-ccb1-4229-b176-0e1af03d6c6d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.697192 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef47e61f-9c90-4ccc-af09-58fcdb99b371-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.697531 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-oauth-serving-cert\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.702666 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-config\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.705220 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/ef47e61f-9c90-4ccc-af09-58fcdb99b371-registry-certificates\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.709721 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ef47e61f-9c90-4ccc-af09-58fcdb99b371-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.710134 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-serving-cert\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.710251 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a31e8874-9bb7-486f-969a-ad85fd41895c-tmpfs\") pod \"packageserver-d55dfcdfc-wz7n4\" (UID: \"a31e8874-9bb7-486f-969a-ad85fd41895c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.710964 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-oauth-config\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.711916 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/7c745995-ccb1-4229-b176-0e1af03d6c6d-metrics-tls\") pod \"ingress-operator-5b745b69d9-65ffv\" (UID: \"7c745995-ccb1-4229-b176-0e1af03d6c6d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.713320 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a31e8874-9bb7-486f-969a-ad85fd41895c-webhook-cert\") pod \"packageserver-d55dfcdfc-wz7n4\" (UID: \"a31e8874-9bb7-486f-969a-ad85fd41895c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.713940 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a31e8874-9bb7-486f-969a-ad85fd41895c-apiservice-cert\") pod \"packageserver-d55dfcdfc-wz7n4\" (UID: \"a31e8874-9bb7-486f-969a-ad85fd41895c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.715013 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-registry-tls\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.718012 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.744575 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c745995-ccb1-4229-b176-0e1af03d6c6d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-65ffv\" (UID: \"7c745995-ccb1-4229-b176-0e1af03d6c6d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.769202 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg8f8\" (UniqueName: \"kubernetes.io/projected/b177214f-7d4c-4f4f-8741-3a2695d1c495-kube-api-access-zg8f8\") pod \"console-f9d7485db-tvjqj\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:24 crc kubenswrapper[4660]: W0129 12:08:24.788645 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddde2be07_f4f9_4868_801f_4a0b650a5b7f.slice/crio-b85c0d1c9f9e935e9083d37180ae66f740082e94484439d2f38e10ea0389753c WatchSource:0}: Error finding container b85c0d1c9f9e935e9083d37180ae66f740082e94484439d2f38e10ea0389753c: Status 404 returned error can't find the container with id b85c0d1c9f9e935e9083d37180ae66f740082e94484439d2f38e10ea0389753c Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.793649 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl4qm\" (UniqueName: \"kubernetes.io/projected/7c745995-ccb1-4229-b176-0e1af03d6c6d-kube-api-access-kl4qm\") pod \"ingress-operator-5b745b69d9-65ffv\" (UID: \"7c745995-ccb1-4229-b176-0e1af03d6c6d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.795463 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0cadab0-0302-4b01-8cc8-9e0bef8ceb7c-cert\") pod \"ingress-canary-l6j2g\" (UID: \"d0cadab0-0302-4b01-8cc8-9e0bef8ceb7c\") " pod="openshift-ingress-canary/ingress-canary-l6j2g" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.795504 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3703c6e8-170d-4aa1-978a-955d564ab30a-node-bootstrap-token\") pod \"machine-config-server-kjbsr\" (UID: \"3703c6e8-170d-4aa1-978a-955d564ab30a\") " pod="openshift-machine-config-operator/machine-config-server-kjbsr" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.795525 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3703c6e8-170d-4aa1-978a-955d564ab30a-certs\") pod \"machine-config-server-kjbsr\" (UID: \"3703c6e8-170d-4aa1-978a-955d564ab30a\") " pod="openshift-machine-config-operator/machine-config-server-kjbsr" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.795563 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-666vd\" (UniqueName: \"kubernetes.io/projected/d0cadab0-0302-4b01-8cc8-9e0bef8ceb7c-kube-api-access-666vd\") pod \"ingress-canary-l6j2g\" (UID: \"d0cadab0-0302-4b01-8cc8-9e0bef8ceb7c\") " pod="openshift-ingress-canary/ingress-canary-l6j2g" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.795589 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2fe5735-2f78-4963-8f44-8ed9ebc8549d-config-volume\") pod \"dns-default-wtvmf\" (UID: \"e2fe5735-2f78-4963-8f44-8ed9ebc8549d\") " pod="openshift-dns/dns-default-wtvmf" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.795612 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn45l\" (UniqueName: 
\"kubernetes.io/projected/3703c6e8-170d-4aa1-978a-955d564ab30a-kube-api-access-fn45l\") pod \"machine-config-server-kjbsr\" (UID: \"3703c6e8-170d-4aa1-978a-955d564ab30a\") " pod="openshift-machine-config-operator/machine-config-server-kjbsr" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.795665 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.795704 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg4qm\" (UniqueName: \"kubernetes.io/projected/e2fe5735-2f78-4963-8f44-8ed9ebc8549d-kube-api-access-wg4qm\") pod \"dns-default-wtvmf\" (UID: \"e2fe5735-2f78-4963-8f44-8ed9ebc8549d\") " pod="openshift-dns/dns-default-wtvmf" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.795723 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e2fe5735-2f78-4963-8f44-8ed9ebc8549d-metrics-tls\") pod \"dns-default-wtvmf\" (UID: \"e2fe5735-2f78-4963-8f44-8ed9ebc8549d\") " pod="openshift-dns/dns-default-wtvmf" Jan 29 12:08:24 crc kubenswrapper[4660]: E0129 12:08:24.799882 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:25.299866264 +0000 UTC m=+142.522808396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.800646 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2fe5735-2f78-4963-8f44-8ed9ebc8549d-config-volume\") pod \"dns-default-wtvmf\" (UID: \"e2fe5735-2f78-4963-8f44-8ed9ebc8549d\") " pod="openshift-dns/dns-default-wtvmf" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.809474 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0cadab0-0302-4b01-8cc8-9e0bef8ceb7c-cert\") pod \"ingress-canary-l6j2g\" (UID: \"d0cadab0-0302-4b01-8cc8-9e0bef8ceb7c\") " pod="openshift-ingress-canary/ingress-canary-l6j2g" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.810837 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3703c6e8-170d-4aa1-978a-955d564ab30a-node-bootstrap-token\") pod \"machine-config-server-kjbsr\" (UID: \"3703c6e8-170d-4aa1-978a-955d564ab30a\") " pod="openshift-machine-config-operator/machine-config-server-kjbsr" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.813130 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e2fe5735-2f78-4963-8f44-8ed9ebc8549d-metrics-tls\") pod \"dns-default-wtvmf\" (UID: \"e2fe5735-2f78-4963-8f44-8ed9ebc8549d\") " pod="openshift-dns/dns-default-wtvmf" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 
12:08:24.813581 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3703c6e8-170d-4aa1-978a-955d564ab30a-certs\") pod \"machine-config-server-kjbsr\" (UID: \"3703c6e8-170d-4aa1-978a-955d564ab30a\") " pod="openshift-machine-config-operator/machine-config-server-kjbsr" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.826500 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkqlk\" (UniqueName: \"kubernetes.io/projected/a31e8874-9bb7-486f-969a-ad85fd41895c-kube-api-access-pkqlk\") pod \"packageserver-d55dfcdfc-wz7n4\" (UID: \"a31e8874-9bb7-486f-969a-ad85fd41895c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.843958 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.853795 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9zrm5"] Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.856032 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw67j\" (UniqueName: \"kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-kube-api-access-rw67j\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.877371 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-bound-sa-token\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc 
kubenswrapper[4660]: I0129 12:08:24.899731 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:24 crc kubenswrapper[4660]: E0129 12:08:24.899889 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:25.399870988 +0000 UTC m=+142.622813120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.900238 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.900706 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l"] Jan 29 12:08:24 crc kubenswrapper[4660]: E0129 12:08:24.900890 4660 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:25.400883117 +0000 UTC m=+142.623825249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.913742 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h"] Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.915725 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-666vd\" (UniqueName: \"kubernetes.io/projected/d0cadab0-0302-4b01-8cc8-9e0bef8ceb7c-kube-api-access-666vd\") pod \"ingress-canary-l6j2g\" (UID: \"d0cadab0-0302-4b01-8cc8-9e0bef8ceb7c\") " pod="openshift-ingress-canary/ingress-canary-l6j2g" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.919996 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn45l\" (UniqueName: \"kubernetes.io/projected/3703c6e8-170d-4aa1-978a-955d564ab30a-kube-api-access-fn45l\") pod \"machine-config-server-kjbsr\" (UID: \"3703c6e8-170d-4aa1-978a-955d564ab30a\") " pod="openshift-machine-config-operator/machine-config-server-kjbsr" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.940939 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kjbsr" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.951094 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-l6j2g" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.954639 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg4qm\" (UniqueName: \"kubernetes.io/projected/e2fe5735-2f78-4963-8f44-8ed9ebc8549d-kube-api-access-wg4qm\") pod \"dns-default-wtvmf\" (UID: \"e2fe5735-2f78-4963-8f44-8ed9ebc8549d\") " pod="openshift-dns/dns-default-wtvmf" Jan 29 12:08:24 crc kubenswrapper[4660]: I0129 12:08:24.993824 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.001376 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:25 crc kubenswrapper[4660]: E0129 12:08:25.001552 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:25.5015244 +0000 UTC m=+142.724466532 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.001714 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:25 crc kubenswrapper[4660]: E0129 12:08:25.001970 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:25.501958363 +0000 UTC m=+142.724900495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:25 crc kubenswrapper[4660]: W0129 12:08:25.050669 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3703c6e8_170d_4aa1_978a_955d564ab30a.slice/crio-c542a497e4e4b1d49c87c016db2dc37c9aeeb85a42fb8557a6c228f602db9d75 WatchSource:0}: Error finding container c542a497e4e4b1d49c87c016db2dc37c9aeeb85a42fb8557a6c228f602db9d75: Status 404 returned error can't find the container with id c542a497e4e4b1d49c87c016db2dc37c9aeeb85a42fb8557a6c228f602db9d75 Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.065159 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mcjsl"] Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.070268 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.102707 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:25 crc kubenswrapper[4660]: E0129 12:08:25.103218 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:25.603203973 +0000 UTC m=+142.826146105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.116628 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-knwfb"] Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.205324 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:25 crc 
kubenswrapper[4660]: E0129 12:08:25.205820 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:25.705805824 +0000 UTC m=+142.928747956 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.243534 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wtvmf" Jan 29 12:08:25 crc kubenswrapper[4660]: W0129 12:08:25.296483 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod398de0cd_73e3_46c2_9ed1_4921e64fe50b.slice/crio-950ecebce29408e83b1b1f37006af5a6803b46b86188cda5e127656dcb59e7a3 WatchSource:0}: Error finding container 950ecebce29408e83b1b1f37006af5a6803b46b86188cda5e127656dcb59e7a3: Status 404 returned error can't find the container with id 950ecebce29408e83b1b1f37006af5a6803b46b86188cda5e127656dcb59e7a3 Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.306339 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:25 crc kubenswrapper[4660]: E0129 12:08:25.307573 4660 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:25.807485167 +0000 UTC m=+143.030427299 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.307611 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:25 crc kubenswrapper[4660]: E0129 12:08:25.307978 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:25.807968781 +0000 UTC m=+143.030910913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.410226 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:25 crc kubenswrapper[4660]: E0129 12:08:25.410569 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:25.910554911 +0000 UTC m=+143.133497043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.411152 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-grgf5" event={"ID":"5d163d7f-d49a-4487-9a62-a094182ac910","Type":"ContainerStarted","Data":"55d5df6f65f6260e11df2af4efce989196bd0805e10ab210eb3a9ff1d440c86d"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.418170 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l" event={"ID":"08bfd5dc-3708-4cc8-acf8-8fa2a91aef54","Type":"ContainerStarted","Data":"787718d4701c287771750ec76f10ca6a3b1d9af570bc84a582f497e2423dbb34"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.517722 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" event={"ID":"394d4964-a60d-4ad5-90e7-44a7e36eae71","Type":"ContainerStarted","Data":"0e60ac6e002ce4f6b558e4515ff65137024828f737ba1381cb3922b4eade907d"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.517946 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss"] Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.518092 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w"] Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.518193 4660 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw"] Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.520750 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:25 crc kubenswrapper[4660]: E0129 12:08:25.524373 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:26.02435485 +0000 UTC m=+143.247297032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.529913 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4s8m8"] Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.534513 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hbts9" event={"ID":"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd","Type":"ContainerStarted","Data":"e25169c2510f165ad584cf6db10643b3aacfbc8e457fb9b4306d945308ca256c"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.534573 4660 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ingress/router-default-5444994796-hbts9" event={"ID":"8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd","Type":"ContainerStarted","Data":"3d610ec9db5c8ad0dc02075bd55bb9b63e946b477fc0923153f7613ee1a9ebce"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.548281 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h994k" event={"ID":"dde2be07-f4f9-4868-801f-4a0b650a5b7f","Type":"ContainerStarted","Data":"b85c0d1c9f9e935e9083d37180ae66f740082e94484439d2f38e10ea0389753c"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.549923 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mcjsl" event={"ID":"4064e41f-ba1f-4b56-ac8b-2b50579d0953","Type":"ContainerStarted","Data":"cf9457239b3db52e008c15bd4fdd197db2131fa4874be476ae869682ef619e93"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.550563 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-w8qs9" event={"ID":"a749f255-61ea-4370-87fc-2d13276383b0","Type":"ContainerStarted","Data":"0f6edc7b69646733d746f2f4e2df62fdcf46bd3f96bffce32c6908f37b7294e7"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.552776 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" event={"ID":"1afa8f6d-9033-41f9-b30c-4ce3b4b56399","Type":"ContainerStarted","Data":"690416dd8f87c6dfb4e6b701d967db3ce8fe84051d0ba644c0397b68d09473de"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.552802 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" event={"ID":"1afa8f6d-9033-41f9-b30c-4ce3b4b56399","Type":"ContainerStarted","Data":"3aaf203a06137b93ee73bd8ee9659c42bf5b4c25c90481c240c69deb6b12a39c"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.573152 4660 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" event={"ID":"398de0cd-73e3-46c2-9ed1-4921e64fe50b","Type":"ContainerStarted","Data":"950ecebce29408e83b1b1f37006af5a6803b46b86188cda5e127656dcb59e7a3"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.575433 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-v4bh4" event={"ID":"5c4cd802-1046-46c7-9168-0a281e7e92b2","Type":"ContainerStarted","Data":"c07a7dce09afbc3c6c7a962fd8d4de7e62c0d358754dd488bb8665b7dc3449b9"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.580248 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kjbsr" event={"ID":"3703c6e8-170d-4aa1-978a-955d564ab30a","Type":"ContainerStarted","Data":"c542a497e4e4b1d49c87c016db2dc37c9aeeb85a42fb8557a6c228f602db9d75"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.588246 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" event={"ID":"c5c9df79-c602-4d20-9b74-c96f479d0f03","Type":"ContainerStarted","Data":"bac7a5c64aa8b14c114aaa056f7adfd2ccf3d224ab4276b7b9053a4d7676d822"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.600127 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" event={"ID":"101df3f7-db56-4a33-a0ec-d513f6785dde","Type":"ContainerStarted","Data":"6b5100f5876b622c9365484dc73b8ad90536004ec5d7206a666193ae7aaf4531"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.603913 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm" event={"ID":"0cafd071-62fd-4601-a1ec-6ff85b52d0f5","Type":"ContainerStarted","Data":"5822d19d3778b9ff7f660cc7f70313d531a3c376e48c3ebdc2cb9af86ab25370"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 
12:08:25.624804 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:25 crc kubenswrapper[4660]: E0129 12:08:25.627774 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:26.127748263 +0000 UTC m=+143.350690435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:25 crc kubenswrapper[4660]: W0129 12:08:25.633489 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7244f40b_2b72_48e2_bd02_5fdc718a460b.slice/crio-423d3fa305a1a608f7748883b8b171a9c149663c63188a57002bccbbf090cf32 WatchSource:0}: Error finding container 423d3fa305a1a608f7748883b8b171a9c149663c63188a57002bccbbf090cf32: Status 404 returned error can't find the container with id 423d3fa305a1a608f7748883b8b171a9c149663c63188a57002bccbbf090cf32 Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.637497 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" 
event={"ID":"6c466078-87ee-40ea-83ee-11aa309b065f","Type":"ContainerStarted","Data":"d9e88d974e92afdb9db1c92b68f18bc8db025372fa6fc610d43568e02c3ce06b"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.643532 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" event={"ID":"1362cdda-d5a4-416f-8f65-6a631433d1ef","Type":"ContainerStarted","Data":"a6b04ac51ac66da9946c357e81879016da9a74433dea891406241240bd15c3b8"} Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.696307 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" event={"ID":"9992b62a-1c70-4213-b515-009b40aa326e","Type":"ContainerStarted","Data":"2377361593f1769350c15365173c101bed6c0c4e9645eb4305ddff5d95d5c178"} Jan 29 12:08:25 crc kubenswrapper[4660]: E0129 12:08:25.731382 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:26.231367233 +0000 UTC m=+143.454309365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.731415 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.832855 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:25 crc kubenswrapper[4660]: E0129 12:08:25.840230 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:26.340184726 +0000 UTC m=+143.563126868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.852311 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:25 crc kubenswrapper[4660]: E0129 12:08:25.852900 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:26.352886288 +0000 UTC m=+143.575828430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.953348 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:25 crc kubenswrapper[4660]: E0129 12:08:25.953778 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:26.453763408 +0000 UTC m=+143.676705540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.978043 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf"] Jan 29 12:08:25 crc kubenswrapper[4660]: I0129 12:08:25.992665 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.011682 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4c2k4"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.032028 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.052379 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.056984 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:26 crc kubenswrapper[4660]: E0129 12:08:26.057313 4660 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:26.557300365 +0000 UTC m=+143.780242497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.088985 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6jrtx"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.157830 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:26 crc kubenswrapper[4660]: E0129 12:08:26.158267 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:26.658232636 +0000 UTC m=+143.881174768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.162322 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ld5r8"] Jan 29 12:08:26 crc kubenswrapper[4660]: W0129 12:08:26.199599 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aa0d62b_e9d2_4163_beb7_74d499965b65.slice/crio-7c4619890b3e3f09ff0dd50214fee47b68a6e064cf270150e636d3a48f735190 WatchSource:0}: Error finding container 7c4619890b3e3f09ff0dd50214fee47b68a6e064cf270150e636d3a48f735190: Status 404 returned error can't find the container with id 7c4619890b3e3f09ff0dd50214fee47b68a6e064cf270150e636d3a48f735190 Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.202633 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.202903 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.202937 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="Get 
\"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 29 12:08:26 crc kubenswrapper[4660]: W0129 12:08:26.236709 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e9a689c_66c4_46e8_ba11_951ab4eaf0d1.slice/crio-e3118c0a4b6966c55d50553f99ea6b5304b77894e2b473c7f7bfb95230db50b2 WatchSource:0}: Error finding container e3118c0a4b6966c55d50553f99ea6b5304b77894e2b473c7f7bfb95230db50b2: Status 404 returned error can't find the container with id e3118c0a4b6966c55d50553f99ea6b5304b77894e2b473c7f7bfb95230db50b2 Jan 29 12:08:26 crc kubenswrapper[4660]: W0129 12:08:26.244519 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3064f75_3f20_425e_94fc_9b2db0147c1d.slice/crio-903e1f7afe154de5d14cec400f7b15992cf8a10488d45d84f41fe098ae3251fd WatchSource:0}: Error finding container 903e1f7afe154de5d14cec400f7b15992cf8a10488d45d84f41fe098ae3251fd: Status 404 returned error can't find the container with id 903e1f7afe154de5d14cec400f7b15992cf8a10488d45d84f41fe098ae3251fd Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.259376 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:26 crc kubenswrapper[4660]: E0129 12:08:26.261794 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:26.761761594 +0000 UTC m=+143.984703726 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.272940 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.273002 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.359222 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hbts9" podStartSLOduration=124.359204773 podStartE2EDuration="2m4.359204773s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:26.356310778 +0000 UTC m=+143.579252910" watchObservedRunningTime="2026-01-29 12:08:26.359204773 +0000 UTC m=+143.582146905" Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.360416 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:26 crc kubenswrapper[4660]: E0129 12:08:26.360665 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:26.860624444 +0000 UTC m=+144.083566616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.361248 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:26 crc kubenswrapper[4660]: E0129 12:08:26.361937 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:26.861922102 +0000 UTC m=+144.084864234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.423217 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-grt4k"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.450564 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.459715 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.462032 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.462512 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:26 crc kubenswrapper[4660]: E0129 12:08:26.473443 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 12:08:26.973404783 +0000 UTC m=+144.196346915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.502274 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mfb8p"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.521221 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.556933 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.566968 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.568238 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:26 crc kubenswrapper[4660]: E0129 12:08:26.568552 4660 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:27.068537294 +0000 UTC m=+144.291479426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.571550 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8j7l7"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.578277 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-l6j2g"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.586429 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wtvmf"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.589134 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z"] Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.628115 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tvjqj"] Jan 29 12:08:26 crc kubenswrapper[4660]: W0129 12:08:26.639333 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d92dd16_4a3a_42f4_9260_241ab774a2ea.slice/crio-88d89a9803c2dbb4debfdd60a94b8aa4fac154bb9f9e64ab020afa11ee623d17 WatchSource:0}: Error finding container 88d89a9803c2dbb4debfdd60a94b8aa4fac154bb9f9e64ab020afa11ee623d17: Status 404 returned 
error can't find the container with id 88d89a9803c2dbb4debfdd60a94b8aa4fac154bb9f9e64ab020afa11ee623d17 Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.668856 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:26 crc kubenswrapper[4660]: E0129 12:08:26.669174 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:27.169157396 +0000 UTC m=+144.392099528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.720561 4660 generic.go:334] "Generic (PLEG): container finished" podID="9992b62a-1c70-4213-b515-009b40aa326e" containerID="e1caddf879c4755f331391df0863713b56db59d4c208686ab45f7c6be1af4ccd" exitCode=0 Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.720796 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" event={"ID":"9992b62a-1c70-4213-b515-009b40aa326e","Type":"ContainerDied","Data":"e1caddf879c4755f331391df0863713b56db59d4c208686ab45f7c6be1af4ccd"} Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.746286 4660 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" event={"ID":"ba274694-159c-4f63-9aff-54ba10d6f5ed","Type":"ContainerStarted","Data":"c968cde0371e30c05fbb9275bb069c9f6665ede0ebccabcfd40b3b62eb21656b"} Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.752674 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-l6j2g" event={"ID":"d0cadab0-0302-4b01-8cc8-9e0bef8ceb7c","Type":"ContainerStarted","Data":"39287d703150cd57fc36b65fdebbbc5843729ab5d0717c31fb1de371180144cc"} Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.757783 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z" event={"ID":"e853a192-370e-4deb-9668-671fd221b7fc","Type":"ContainerStarted","Data":"fdd672ef562aff6e23bc585e8d6e06c7bf3b4ebe5993f2791ce13679c82f153c"} Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.761507 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-v4bh4" event={"ID":"5c4cd802-1046-46c7-9168-0a281e7e92b2","Type":"ContainerStarted","Data":"1bb709f03937d4da6f3a8c551d342675c88cdef9d72d0be7e591559010499357"} Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.764650 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf" event={"ID":"4aa0d62b-e9d2-4163-beb7-74d499965b65","Type":"ContainerStarted","Data":"7c4619890b3e3f09ff0dd50214fee47b68a6e064cf270150e636d3a48f735190"} Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.770225 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: 
\"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:26 crc kubenswrapper[4660]: E0129 12:08:26.770656 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:27.270620723 +0000 UTC m=+144.493562855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.771843 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm" event={"ID":"0cafd071-62fd-4601-a1ec-6ff85b52d0f5","Type":"ContainerStarted","Data":"458e9ba815ce8cccec333cdf06b4fdc31299bc40572bf49f991b141c84dd52be"} Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.797778 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-44dmm" podStartSLOduration=124.797757409 podStartE2EDuration="2m4.797757409s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:26.796312157 +0000 UTC m=+144.019254349" watchObservedRunningTime="2026-01-29 12:08:26.797757409 +0000 UTC m=+144.020699561" Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.871751 4660 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:26 crc kubenswrapper[4660]: E0129 12:08:26.871884 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:27.371868834 +0000 UTC m=+144.594810966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.872206 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:26 crc kubenswrapper[4660]: E0129 12:08:26.873319 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:27.373311386 +0000 UTC m=+144.596253518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.888237 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh" event={"ID":"48991717-d2c1-427b-8b52-dc549b2e87d9","Type":"ContainerStarted","Data":"63061ec915ce20d0db62694716223ace97c7a154e83a6b3e23c25a443868cd95"} Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.891794 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" event={"ID":"d1456961-1af8-402a-9ebd-9cb419b85701","Type":"ContainerStarted","Data":"dcf4733d193636c965ab1d736ac425ce68f53e56d02847c12d2850fbbffea47e"} Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.901820 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" event={"ID":"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8","Type":"ContainerStarted","Data":"17583a1ee8972e5e9d681d075aa6fefa8462e32c7b0baa7e6b38f9e3ad0e37fa"} Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.917548 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-grgf5" event={"ID":"5d163d7f-d49a-4487-9a62-a094182ac910","Type":"ContainerStarted","Data":"d0f20488c07a411e01bcfd7cbb0eba8a3507c5ce378b32e8e016f8ee024b06fb"} Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.918230 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-grgf5" Jan 29 
12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.922206 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-grgf5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.922257 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.949516 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-grgf5" podStartSLOduration=124.949478481 podStartE2EDuration="2m4.949478481s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:26.948646507 +0000 UTC m=+144.171588639" watchObservedRunningTime="2026-01-29 12:08:26.949478481 +0000 UTC m=+144.172420613" Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.967634 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" event={"ID":"d78c77cc-e724-4f09-b613-08983a1d2658","Type":"ContainerStarted","Data":"55c95f102059916a3a5b23c7fa200f76a630c03c3423dbe7afa6da44cf6fc1f1"} Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.976875 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:26 crc kubenswrapper[4660]: E0129 12:08:26.977101 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:27.47706999 +0000 UTC m=+144.700012122 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.977155 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:26 crc kubenswrapper[4660]: E0129 12:08:26.977548 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:27.477536204 +0000 UTC m=+144.700478336 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.978330 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tvjqj" event={"ID":"b177214f-7d4c-4f4f-8741-3a2695d1c495","Type":"ContainerStarted","Data":"312f2a0a6faf6dc5eb08908b173df3c71af0bef6bd831deb9a5b2fc17159bd7f"} Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.992132 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" event={"ID":"8d4807a1-3e06-4ec0-9e43-70d5c9755f61","Type":"ContainerStarted","Data":"5c7551ac95dd056cd2593db60c8e798ec119f4ec4b3ff9e13d0f5dd6417b1af2"} Jan 29 12:08:26 crc kubenswrapper[4660]: I0129 12:08:26.996860 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kjbsr" event={"ID":"3703c6e8-170d-4aa1-978a-955d564ab30a","Type":"ContainerStarted","Data":"89a276fe97620b759f3e872dc44e55c88b087804829122007689c56ed1282eed"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.001371 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" event={"ID":"a31e8874-9bb7-486f-969a-ad85fd41895c","Type":"ContainerStarted","Data":"5cfd681b90878f1d2e9303ff0c96b033bb5066166bb0bbc0c4facaf6672fd576"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.017064 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-server-kjbsr" podStartSLOduration=6.017044633 podStartE2EDuration="6.017044633s" podCreationTimestamp="2026-01-29 12:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:27.015062915 +0000 UTC m=+144.238005057" watchObservedRunningTime="2026-01-29 12:08:27.017044633 +0000 UTC m=+144.239986765" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.027910 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" event={"ID":"2a6abc12-af78-4433-84ae-8421ade4d80c","Type":"ContainerStarted","Data":"ac5cfe210741ee473e054f0449469ef0b4df5994070032478ec10765b56c9700"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.030839 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mfb8p" event={"ID":"5d92dd16-4a3a-42f4-9260-241ab774a2ea","Type":"ContainerStarted","Data":"88d89a9803c2dbb4debfdd60a94b8aa4fac154bb9f9e64ab020afa11ee623d17"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.032542 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4c2k4" event={"ID":"fc3dbe01-fdbc-4393-9a35-7f6244c4f385","Type":"ContainerStarted","Data":"24c4f857502c705f2b709cf0f1c86b9ae5a6747348e248eb4e8e9f7217149ad5"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.035482 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h994k" event={"ID":"dde2be07-f4f9-4868-801f-4a0b650a5b7f","Type":"ContainerStarted","Data":"52a2b7c906812812b108af6298e6aa9a34c47711e2e7a368db28187799b33d27"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.037915 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-h994k" 
Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.041089 4660 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-h994k container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.041164 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-h994k" podUID="dde2be07-f4f9-4868-801f-4a0b650a5b7f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.046722 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" event={"ID":"7c745995-ccb1-4229-b176-0e1af03d6c6d","Type":"ContainerStarted","Data":"586af559779a8b5b6ab47c58e2b833bf9e6261171118a528f3cca981dad07455"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.067125 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-h994k" podStartSLOduration=125.067105282 podStartE2EDuration="2m5.067105282s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:27.059102427 +0000 UTC m=+144.282044579" watchObservedRunningTime="2026-01-29 12:08:27.067105282 +0000 UTC m=+144.290047414" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.073554 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6jrtx" 
event={"ID":"4ab0e310-7e1b-404d-b763-b6813d39d49d","Type":"ContainerStarted","Data":"5d3444fbe5e67f21937654779a0690e30b77cbb09c3a9b5ada3a8ba3fdafcb64"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.079035 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:27 crc kubenswrapper[4660]: E0129 12:08:27.079321 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:27.57930203 +0000 UTC m=+144.802244172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.086734 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" event={"ID":"398de0cd-73e3-46c2-9ed1-4921e64fe50b","Type":"ContainerStarted","Data":"b9403873e091e2f235e2f48d9c4240c64eacd08070f3853fa529d2b8be7ed9da"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.087913 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:27 crc kubenswrapper[4660]: E0129 12:08:27.088647 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:27.588632924 +0000 UTC m=+144.811575066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.107056 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" event={"ID":"486306e9-968b-4d73-932e-8f12efbf3204","Type":"ContainerStarted","Data":"40f87029722296a3a666d73671af95fe1c0f1bbb8d63f4fe9acdd6597bb1b3c1"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.108118 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.114500 4660 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dhkss container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 29 
12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.114594 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" podUID="486306e9-968b-4d73-932e-8f12efbf3204" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.124987 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4s8m8" event={"ID":"7244f40b-2b72-48e2-bd02-5fdc718a460b","Type":"ContainerStarted","Data":"423d3fa305a1a608f7748883b8b171a9c149663c63188a57002bccbbf090cf32"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.140293 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-knwfb" podStartSLOduration=125.140267879 podStartE2EDuration="2m5.140267879s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:27.133208441 +0000 UTC m=+144.356150583" watchObservedRunningTime="2026-01-29 12:08:27.140267879 +0000 UTC m=+144.363210011" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.157332 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" event={"ID":"c5c9df79-c602-4d20-9b74-c96f479d0f03","Type":"ContainerStarted","Data":"4af7d1db8479aee3e6ee0592f9d5600e2084e6a03d64b2bbcdc1681ed0534bf9"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.157603 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.159364 4660 
patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wc4dc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.159426 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" podUID="c5c9df79-c602-4d20-9b74-c96f479d0f03" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.177314 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" podStartSLOduration=124.177289345 podStartE2EDuration="2m4.177289345s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:27.172322249 +0000 UTC m=+144.395264391" watchObservedRunningTime="2026-01-29 12:08:27.177289345 +0000 UTC m=+144.400231487" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.187151 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" event={"ID":"a5d2d1e9-c405-4de7-8a8f-18180c41f64f","Type":"ContainerStarted","Data":"00db114179526e18dfbd694ca7c63d21bbb672cf109b7464f1d0f1c787f0a455"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.189777 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:27 crc kubenswrapper[4660]: E0129 12:08:27.192027 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:27.691747479 +0000 UTC m=+144.914689611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.200755 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp" event={"ID":"6e9a689c-66c4-46e8-ba11-951ab4eaf0d1","Type":"ContainerStarted","Data":"e3118c0a4b6966c55d50553f99ea6b5304b77894e2b473c7f7bfb95230db50b2"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.207002 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:27 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:27 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:27 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.207051 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" 
podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.207424 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4s8m8" podStartSLOduration=124.207409368 podStartE2EDuration="2m4.207409368s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:27.204011529 +0000 UTC m=+144.426953661" watchObservedRunningTime="2026-01-29 12:08:27.207409368 +0000 UTC m=+144.430351500" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.236316 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" podStartSLOduration=124.236295546 podStartE2EDuration="2m4.236295546s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:27.221595965 +0000 UTC m=+144.444538097" watchObservedRunningTime="2026-01-29 12:08:27.236295546 +0000 UTC m=+144.459237678" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.237839 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55" event={"ID":"b3064f75-3f20-425e-94fc-9b2db0147c1d","Type":"ContainerStarted","Data":"903e1f7afe154de5d14cec400f7b15992cf8a10488d45d84f41fe098ae3251fd"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.245131 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" 
event={"ID":"394d4964-a60d-4ad5-90e7-44a7e36eae71","Type":"ContainerStarted","Data":"6dbc83ffff2cc15df19c95b2224470771201d56f1299ca9e5485a824f63ec097"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.250477 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" event={"ID":"101df3f7-db56-4a33-a0ec-d513f6785dde","Type":"ContainerStarted","Data":"bd69a11a32bff522470509bca7fe9109eba6f652e07fb349bd3dc8585ba42996"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.251620 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.260575 4660 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ccbzg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.260659 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" podUID="101df3f7-db56-4a33-a0ec-d513f6785dde" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.271201 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" event={"ID":"1afa8f6d-9033-41f9-b30c-4ce3b4b56399","Type":"ContainerStarted","Data":"f0991447d13b4592bf3e18191bc0b00e1a7e91dc5d12f425d6bd984b8f32884b"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.278040 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pc48d" 
podStartSLOduration=125.27801619 podStartE2EDuration="2m5.27801619s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:27.276606079 +0000 UTC m=+144.499548211" watchObservedRunningTime="2026-01-29 12:08:27.27801619 +0000 UTC m=+144.500958332" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.283601 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw" event={"ID":"170df6ed-3e79-4577-ba9f-20e4c075128c","Type":"ContainerStarted","Data":"c319056783402f77a1ad5daa4cf0afccacd7726f3a55950a0d18e7386edd7685"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.293325 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.295166 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wtvmf" event={"ID":"e2fe5735-2f78-4963-8f44-8ed9ebc8549d","Type":"ContainerStarted","Data":"4ff5ced6966b2cb627a38c0ae773a51d9084e54d6a5180e7ac1af414202065d9"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.295995 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" podStartSLOduration=125.295982827 podStartE2EDuration="2m5.295982827s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:27.29472541 +0000 UTC 
m=+144.517667542" watchObservedRunningTime="2026-01-29 12:08:27.295982827 +0000 UTC m=+144.518924959" Jan 29 12:08:27 crc kubenswrapper[4660]: E0129 12:08:27.301204 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:27.8011879 +0000 UTC m=+145.024130022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.303549 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh" event={"ID":"e818a9e1-a6ae-4d43-aca2-d9cad02e57ba","Type":"ContainerStarted","Data":"b33dc66994f15c1c55c490e31beaf0047109ceec97bf6cec032849d438c32aa9"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.345542 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-w8qs9" event={"ID":"a749f255-61ea-4370-87fc-2d13276383b0","Type":"ContainerStarted","Data":"39adab438d06a7b3fb6148fa599a2d13c360bbd82c3e2d4709610bb73061f9ce"} Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.345849 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.349904 4660 patch_prober.go:28] interesting pod/console-operator-58897d9998-w8qs9 container/console-operator 
namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.350067 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-w8qs9" podUID="a749f255-61ea-4370-87fc-2d13276383b0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.364263 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-7dvlp" podStartSLOduration=124.364244 podStartE2EDuration="2m4.364244s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:27.317063306 +0000 UTC m=+144.540005438" watchObservedRunningTime="2026-01-29 12:08:27.364244 +0000 UTC m=+144.587186132" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.369839 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-w8qs9" podStartSLOduration=125.365112345 podStartE2EDuration="2m5.365112345s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:27.363532669 +0000 UTC m=+144.586474801" watchObservedRunningTime="2026-01-29 12:08:27.365112345 +0000 UTC m=+144.588054467" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.392343 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 
12:08:27.409662 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:27 crc kubenswrapper[4660]: E0129 12:08:27.409803 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:27.909780206 +0000 UTC m=+145.132722358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.410298 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:27 crc kubenswrapper[4660]: E0129 12:08:27.411427 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 12:08:27.911411834 +0000 UTC m=+145.134354046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.511282 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:27 crc kubenswrapper[4660]: E0129 12:08:27.513071 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:28.013054566 +0000 UTC m=+145.235996688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.613805 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:27 crc kubenswrapper[4660]: E0129 12:08:27.614551 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:28.114535353 +0000 UTC m=+145.337477485 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.715101 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:27 crc kubenswrapper[4660]: E0129 12:08:27.715607 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:28.215587628 +0000 UTC m=+145.438529760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.816178 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:27 crc kubenswrapper[4660]: E0129 12:08:27.816569 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:28.31653537 +0000 UTC m=+145.539477502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.920829 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:27 crc kubenswrapper[4660]: E0129 12:08:27.921093 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:28.421073647 +0000 UTC m=+145.644015779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:27 crc kubenswrapper[4660]: I0129 12:08:27.921136 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:27 crc kubenswrapper[4660]: E0129 12:08:27.921390 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:28.421381716 +0000 UTC m=+145.644323848 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.028397 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:28 crc kubenswrapper[4660]: E0129 12:08:28.028558 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:28.528532 +0000 UTC m=+145.751474132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.028667 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:28 crc kubenswrapper[4660]: E0129 12:08:28.029089 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:28.529073156 +0000 UTC m=+145.752015288 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.130418 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:28 crc kubenswrapper[4660]: E0129 12:08:28.130839 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:28.63081063 +0000 UTC m=+145.853752772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.130999 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:28 crc kubenswrapper[4660]: E0129 12:08:28.131309 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:28.631293465 +0000 UTC m=+145.854236327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.219185 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:28 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:28 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:28 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.219249 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.231738 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:28 crc kubenswrapper[4660]: E0129 12:08:28.232150 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 12:08:28.732134273 +0000 UTC m=+145.955076405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.334760 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:28 crc kubenswrapper[4660]: E0129 12:08:28.335140 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:28.835128775 +0000 UTC m=+146.058070907 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.367633 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6jrtx" event={"ID":"4ab0e310-7e1b-404d-b763-b6813d39d49d","Type":"ContainerStarted","Data":"a068a37e9704e42fb55c5b22c3ff115a831109445adef1ad3ff287bc3fce86f1"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.374475 4660 generic.go:334] "Generic (PLEG): container finished" podID="6c466078-87ee-40ea-83ee-11aa309b065f" containerID="b0ee61157ecabafbf7075f599669656dcbe634783d8475c47f445f51f5cca10d" exitCode=0 Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.374554 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" event={"ID":"6c466078-87ee-40ea-83ee-11aa309b065f","Type":"ContainerDied","Data":"b0ee61157ecabafbf7075f599669656dcbe634783d8475c47f445f51f5cca10d"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.392514 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" event={"ID":"a31e8874-9bb7-486f-969a-ad85fd41895c","Type":"ContainerStarted","Data":"f690c4d06933a2aa008993dea77603b4bff3258bfb1a70ad281a4152359768e5"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.393363 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.394931 
4660 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wz7n4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.395057 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" podUID="a31e8874-9bb7-486f-969a-ad85fd41895c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.406442 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" event={"ID":"d1456961-1af8-402a-9ebd-9cb419b85701","Type":"ContainerStarted","Data":"b72d9f572dc2bed7b17d9d13c3de0b26936c89a1de5efdee2b8f8d84a96c8593"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.428598 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-v4bh4" event={"ID":"5c4cd802-1046-46c7-9168-0a281e7e92b2","Type":"ContainerStarted","Data":"0d3556fed21d00d9ac01999bfdc2b06a2b72a001b3500bab68a324b5819bfc99"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.431617 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4c2k4" event={"ID":"fc3dbe01-fdbc-4393-9a35-7f6244c4f385","Type":"ContainerStarted","Data":"b66ce45e6fa28fb725c471d820692df2615633dd811bb8399c0e721722e2509a"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.433205 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-l6j2g" 
event={"ID":"d0cadab0-0302-4b01-8cc8-9e0bef8ceb7c","Type":"ContainerStarted","Data":"2673addc3dd7696358176d009ce4f98266cde00dae3030c0a969e9acdab71c80"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.435576 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:28 crc kubenswrapper[4660]: E0129 12:08:28.436231 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:28.936211121 +0000 UTC m=+146.159153263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.445456 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" podStartSLOduration=125.445435581 podStartE2EDuration="2m5.445435581s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:28.442560097 +0000 UTC m=+145.665502229" watchObservedRunningTime="2026-01-29 12:08:28.445435581 +0000 UTC m=+145.668377713" Jan 
29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.473113 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-ld5r8" podStartSLOduration=126.473095583 podStartE2EDuration="2m6.473095583s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:28.471932099 +0000 UTC m=+145.694874231" watchObservedRunningTime="2026-01-29 12:08:28.473095583 +0000 UTC m=+145.696037715" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.507550 4660 generic.go:334] "Generic (PLEG): container finished" podID="ba274694-159c-4f63-9aff-54ba10d6f5ed" containerID="166f44272a5169c837d8778c2d0dafe6144c1b4406981d4c8bc153aad7f979ef" exitCode=0 Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.507624 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" event={"ID":"ba274694-159c-4f63-9aff-54ba10d6f5ed","Type":"ContainerDied","Data":"166f44272a5169c837d8778c2d0dafe6144c1b4406981d4c8bc153aad7f979ef"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.529709 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-v4bh4" podStartSLOduration=126.529673853 podStartE2EDuration="2m6.529673853s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:28.529212759 +0000 UTC m=+145.752154901" watchObservedRunningTime="2026-01-29 12:08:28.529673853 +0000 UTC m=+145.752615985" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.537035 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:28 crc kubenswrapper[4660]: E0129 12:08:28.538300 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:29.038281205 +0000 UTC m=+146.261223337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.555213 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" event={"ID":"8d4807a1-3e06-4ec0-9e43-70d5c9755f61","Type":"ContainerStarted","Data":"68ff4bb7ef7048e66da993e9dbb567ec2d32fab4fca9ad9b4e2d7230756c54ba"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.556178 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.557519 4660 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7c4dl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 29 12:08:28 crc 
kubenswrapper[4660]: I0129 12:08:28.557559 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" podUID="8d4807a1-3e06-4ec0-9e43-70d5c9755f61" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.580764 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf" event={"ID":"4aa0d62b-e9d2-4163-beb7-74d499965b65","Type":"ContainerStarted","Data":"c6a85d2f06d82a59764f3777ab93eb7409dedb4bec8a102175deed2c9fd8a5c5"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.620563 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zgbpf" podStartSLOduration=126.620546189 podStartE2EDuration="2m6.620546189s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:28.620181238 +0000 UTC m=+145.843123370" watchObservedRunningTime="2026-01-29 12:08:28.620546189 +0000 UTC m=+145.843488321" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.622393 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-l6j2g" podStartSLOduration=7.622385133 podStartE2EDuration="7.622385133s" podCreationTimestamp="2026-01-29 12:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:28.581539605 +0000 UTC m=+145.804481747" watchObservedRunningTime="2026-01-29 12:08:28.622385133 +0000 UTC m=+145.845327265" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 
12:08:28.632046 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw" event={"ID":"170df6ed-3e79-4577-ba9f-20e4c075128c","Type":"ContainerStarted","Data":"2c96ab8ab6cf3771f65ccce009e3e6b6c496a0c6f7488a128aecf05ed0fc868d"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.632103 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw" event={"ID":"170df6ed-3e79-4577-ba9f-20e4c075128c","Type":"ContainerStarted","Data":"df826d00ce3a245b5337cd35c55a7c3464b8eb4189e5f204613b916ed08084b1"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.632879 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.641306 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:28 crc kubenswrapper[4660]: E0129 12:08:28.642261 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:29.142238055 +0000 UTC m=+146.365180177 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.680203 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp" event={"ID":"6e9a689c-66c4-46e8-ba11-951ab4eaf0d1","Type":"ContainerStarted","Data":"2237fbb6bebc451ab8790e8f377931569bd995de2758ed978ade215a16ed2d3e"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.701975 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" podStartSLOduration=125.701953347 podStartE2EDuration="2m5.701953347s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:28.699598788 +0000 UTC m=+145.922540930" watchObservedRunningTime="2026-01-29 12:08:28.701953347 +0000 UTC m=+145.924895479" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.710013 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4s8m8" event={"ID":"7244f40b-2b72-48e2-bd02-5fdc718a460b","Type":"ContainerStarted","Data":"da7167f0e5cf2d0b404b41ca9e61b9bbac26681b32bcd3c8172414a24444d63b"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.711549 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" 
event={"ID":"d78c77cc-e724-4f09-b613-08983a1d2658","Type":"ContainerStarted","Data":"ecad1a6588dcbd28fd2df2a711b7c0129a72932c5897803b7e5a393e8591b2ea"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.720747 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" event={"ID":"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8","Type":"ContainerStarted","Data":"ef971a7899770bea42b59d8d501ef40b0c07efc5fa0a25dfdf7a4086b9fde529"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.736922 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55" event={"ID":"b3064f75-3f20-425e-94fc-9b2db0147c1d","Type":"ContainerStarted","Data":"f64cbd2cab6f897fed149a17d6c601056874dadd6836eee8e99887e364511f11"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.749570 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:28 crc kubenswrapper[4660]: E0129 12:08:28.750793 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:29.250750369 +0000 UTC m=+146.473692601 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.756888 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" event={"ID":"2a6abc12-af78-4433-84ae-8421ade4d80c","Type":"ContainerStarted","Data":"6b0f243b76e092081d7416edfb2c9b9504506471b90b05e816bff2a78061d812"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.757978 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.760082 4660 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-grt4k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.760168 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" podUID="2a6abc12-af78-4433-84ae-8421ade4d80c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.768569 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mfb8p" 
event={"ID":"5d92dd16-4a3a-42f4-9260-241ab774a2ea","Type":"ContainerStarted","Data":"b682bf53d7f0ce7ba31c993c964df3259ec939102dc7812e55420b01da80a309"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.772451 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z" event={"ID":"e853a192-370e-4deb-9668-671fd221b7fc","Type":"ContainerStarted","Data":"f3e2f613f1e992d5aceb1509c47ee27aa42b3f35a6beed7ec3455d0745636921"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.816162 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw" podStartSLOduration=125.816138688 podStartE2EDuration="2m5.816138688s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:28.797918533 +0000 UTC m=+146.020860675" watchObservedRunningTime="2026-01-29 12:08:28.816138688 +0000 UTC m=+146.039080820" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.816594 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w5xjp" podStartSLOduration=126.816581371 podStartE2EDuration="2m6.816581371s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:28.757095415 +0000 UTC m=+145.980037547" watchObservedRunningTime="2026-01-29 12:08:28.816581371 +0000 UTC m=+146.039523513" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.816986 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tvjqj" 
event={"ID":"b177214f-7d4c-4f4f-8741-3a2695d1c495","Type":"ContainerStarted","Data":"7cdd867575a3582a6e0d424c110a2aef9e6fe40926fbcade0402566adb91ab32"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.851589 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:28 crc kubenswrapper[4660]: E0129 12:08:28.853534 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:29.353517664 +0000 UTC m=+146.576459796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.882773 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" event={"ID":"486306e9-968b-4d73-932e-8f12efbf3204","Type":"ContainerStarted","Data":"e3e0d8b7a2abb723b10bae69224cba68b2413893a916e418d3fc734bcfc30a1a"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.885803 4660 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dhkss container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.885851 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" podUID="486306e9-968b-4d73-932e-8f12efbf3204" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.891726 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" podStartSLOduration=126.891673084 podStartE2EDuration="2m6.891673084s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:28.855132782 +0000 UTC m=+146.078074914" watchObservedRunningTime="2026-01-29 12:08:28.891673084 +0000 UTC m=+146.114615216" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.892206 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mfb8p" podStartSLOduration=125.892200359 podStartE2EDuration="2m5.892200359s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:28.890684085 +0000 UTC m=+146.113626217" watchObservedRunningTime="2026-01-29 12:08:28.892200359 +0000 UTC m=+146.115142481" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.897389 4660 csr.go:261] certificate signing request csr-p8bm7 is approved, waiting to be issued Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.912523 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l" event={"ID":"08bfd5dc-3708-4cc8-acf8-8fa2a91aef54","Type":"ContainerStarted","Data":"e7a5307e4394b849d917ee673ba234561dccef0affd7a6d827cb03f711f18747"} Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.932099 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c5c4w" podStartSLOduration=126.931646647 podStartE2EDuration="2m6.931646647s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:28.928113363 +0000 UTC m=+146.151055515" watchObservedRunningTime="2026-01-29 12:08:28.931646647 +0000 UTC m=+146.154588779" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.961977 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-54d55" podStartSLOduration=126.961952646 podStartE2EDuration="2m6.961952646s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:28.957810574 +0000 UTC m=+146.180752716" watchObservedRunningTime="2026-01-29 12:08:28.961952646 +0000 UTC m=+146.184894788" Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.966411 4660 csr.go:257] certificate signing request csr-p8bm7 is issued Jan 29 12:08:28 crc kubenswrapper[4660]: I0129 12:08:28.968224 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: 
\"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:28 crc kubenswrapper[4660]: E0129 12:08:28.970014 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:29.469995152 +0000 UTC m=+146.692937294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.003146 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh" event={"ID":"48991717-d2c1-427b-8b52-dc549b2e87d9","Type":"ContainerStarted","Data":"36554be4312528c827d01b3334adecb6ae63d232b07984c223e782c8c9cc0e07"} Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.018440 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" event={"ID":"1362cdda-d5a4-416f-8f65-6a631433d1ef","Type":"ContainerStarted","Data":"934a2c5d5a41d682305fe3565ebc635945957dc60bca8f82689c75ed72e0f002"} Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.018480 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" event={"ID":"1362cdda-d5a4-416f-8f65-6a631433d1ef","Type":"ContainerStarted","Data":"f9559e18cd381e4c35ea71cb959e65f8c8c09d30e4c226f9144116e8bc755040"} Jan 29 12:08:29 crc 
kubenswrapper[4660]: I0129 12:08:29.043265 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mcjsl" event={"ID":"4064e41f-ba1f-4b56-ac8b-2b50579d0953","Type":"ContainerStarted","Data":"067bcc8392ad36d7b9c38b2128a1f2d7ffcf1d15343fff876614adfa6428fa21"} Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.056358 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" event={"ID":"7c745995-ccb1-4229-b176-0e1af03d6c6d","Type":"ContainerStarted","Data":"c462d67c7408dfb2dea7e7eec2675055c7f8c26397c441466355b5237c322319"} Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.060757 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-grgf5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.060910 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.070112 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:29 crc kubenswrapper[4660]: E0129 12:08:29.071353 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-29 12:08:29.571332345 +0000 UTC m=+146.794274487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.079152 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.082832 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.150904 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-p5h6z" podStartSLOduration=126.150883969 podStartE2EDuration="2m6.150883969s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:29.004313149 +0000 UTC m=+146.227255281" watchObservedRunningTime="2026-01-29 12:08:29.150883969 +0000 UTC m=+146.373826101" Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.173038 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:29 crc kubenswrapper[4660]: E0129 12:08:29.180093 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:29.680079815 +0000 UTC m=+146.903021947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.209467 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" podStartSLOduration=126.209447257 podStartE2EDuration="2m6.209447257s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:29.15193108 +0000 UTC m=+146.374873212" watchObservedRunningTime="2026-01-29 12:08:29.209447257 +0000 UTC m=+146.432389389" Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.209903 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:29 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:29 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:29 crc kubenswrapper[4660]: healthz check failed Jan 29 
12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.209952 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.274957 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:29 crc kubenswrapper[4660]: E0129 12:08:29.275434 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:29.775418823 +0000 UTC m=+146.998360955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.291925 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" podStartSLOduration=127.291908386 podStartE2EDuration="2m7.291908386s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:29.22318726 +0000 UTC m=+146.446129392" watchObservedRunningTime="2026-01-29 12:08:29.291908386 +0000 UTC m=+146.514850518" Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.376456 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:29 crc kubenswrapper[4660]: E0129 12:08:29.376756 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:29.876744445 +0000 UTC m=+147.099686577 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.404201 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-tvjqj" podStartSLOduration=127.40418328 podStartE2EDuration="2m7.40418328s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:29.298083428 +0000 UTC m=+146.521025560" watchObservedRunningTime="2026-01-29 12:08:29.40418328 +0000 UTC m=+146.627125412" Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.405801 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xvk4l" podStartSLOduration=127.405791298 podStartE2EDuration="2m7.405791298s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:29.399784271 +0000 UTC m=+146.622726403" watchObservedRunningTime="2026-01-29 12:08:29.405791298 +0000 UTC m=+146.628733440" Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.477793 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:29 crc kubenswrapper[4660]: E0129 12:08:29.478225 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:29.978195252 +0000 UTC m=+147.201137394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.576949 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lst8h" podStartSLOduration=126.576932928 podStartE2EDuration="2m6.576932928s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:29.576382432 +0000 UTC m=+146.799324564" watchObservedRunningTime="2026-01-29 12:08:29.576932928 +0000 UTC m=+146.799875070" Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.579143 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" 
Jan 29 12:08:29 crc kubenswrapper[4660]: E0129 12:08:29.579472 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.079458092 +0000 UTC m=+147.302400224 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.681718 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:29 crc kubenswrapper[4660]: E0129 12:08:29.682003 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.18198707 +0000 UTC m=+147.404929192 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.742230 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh" podStartSLOduration=126.742208537 podStartE2EDuration="2m6.742208537s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:29.709972941 +0000 UTC m=+146.932915073" watchObservedRunningTime="2026-01-29 12:08:29.742208537 +0000 UTC m=+146.965150669" Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.783222 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:29 crc kubenswrapper[4660]: E0129 12:08:29.783759 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.283741225 +0000 UTC m=+147.506683357 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.863573 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-mcjsl" podStartSLOduration=126.863554367 podStartE2EDuration="2m6.863554367s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:29.863211987 +0000 UTC m=+147.086154119" watchObservedRunningTime="2026-01-29 12:08:29.863554367 +0000 UTC m=+147.086496499" Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.884801 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:29 crc kubenswrapper[4660]: E0129 12:08:29.884994 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.384964935 +0000 UTC m=+147.607907067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.885081 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:29 crc kubenswrapper[4660]: E0129 12:08:29.885431 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.385415469 +0000 UTC m=+147.608357601 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.973959 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 12:03:28 +0000 UTC, rotation deadline is 2026-10-28 07:30:45.828538678 +0000 UTC Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.974000 4660 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6523h22m15.854541391s for next certificate rotation Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.986753 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:29 crc kubenswrapper[4660]: E0129 12:08:29.987071 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.4870518 +0000 UTC m=+147.709993932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:29 crc kubenswrapper[4660]: I0129 12:08:29.989708 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.057130 4660 patch_prober.go:28] interesting pod/console-operator-58897d9998-w8qs9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.057186 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-w8qs9" podUID="a749f255-61ea-4370-87fc-2d13276383b0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.068864 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wtvmf" event={"ID":"e2fe5735-2f78-4963-8f44-8ed9ebc8549d","Type":"ContainerStarted","Data":"0b26ddbc1caea6dd2f33e3399c2a0408cf49eb7e53750babb9d18bbd700a45de"} Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.068903 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wtvmf" 
event={"ID":"e2fe5735-2f78-4963-8f44-8ed9ebc8549d","Type":"ContainerStarted","Data":"a6536b44a7242df1b0f42c5cbc6d306166244797c48e2fe137882155c308e211"} Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.069093 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wtvmf" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.070561 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh" event={"ID":"e818a9e1-a6ae-4d43-aca2-d9cad02e57ba","Type":"ContainerStarted","Data":"84dc4fe89879158903fb5c3a240a95aab72fabc92e56c2522a6270662bbd3ec7"} Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.073187 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mfb8p" event={"ID":"5d92dd16-4a3a-42f4-9260-241ab774a2ea","Type":"ContainerStarted","Data":"777b53aa2f312f4f7a53dc65756df4d7b86409bebe719dad1562120bd50b5328"} Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.075228 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" event={"ID":"9992b62a-1c70-4213-b515-009b40aa326e","Type":"ContainerStarted","Data":"944b485c46d561f56d0820a514f1bd7997f4471f5a8652d58737dde90a5c2e3a"} Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.078679 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6jrtx" event={"ID":"4ab0e310-7e1b-404d-b763-b6813d39d49d","Type":"ContainerStarted","Data":"424b087eb41738af1497b97a25d64689871321ae396273d487151279aee0ecf7"} Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.081723 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" 
event={"ID":"ba274694-159c-4f63-9aff-54ba10d6f5ed","Type":"ContainerStarted","Data":"4f8d8a9df0f374971fe761b6a85de30673f2feaeaa72e6fa25172a5b31889373"} Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.082149 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.085026 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" event={"ID":"6c466078-87ee-40ea-83ee-11aa309b065f","Type":"ContainerStarted","Data":"6b1f32677f09560e0e438c341a974a1dca810227a09bd92daa0c6c32b47c8ea9"} Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.085100 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" event={"ID":"6c466078-87ee-40ea-83ee-11aa309b065f","Type":"ContainerStarted","Data":"3a0cbc29fb1d78db61109d9ba4ef73244f185cab41fe091dd97dfd56280753c0"} Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.086795 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z87xh" event={"ID":"48991717-d2c1-427b-8b52-dc549b2e87d9","Type":"ContainerStarted","Data":"5058c06455278757f9312b5dc03d687293328e25dc0f682ba509a44b72b96455"} Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.087942 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:30 crc kubenswrapper[4660]: E0129 12:08:30.088286 4660 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.58827359 +0000 UTC m=+147.811215722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.088823 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4c2k4" event={"ID":"fc3dbe01-fdbc-4393-9a35-7f6244c4f385","Type":"ContainerStarted","Data":"7d0a1f65ec639cd21da4362665033c93986eea391fa89f922a248c6e4a475fdc"} Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.090880 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" event={"ID":"a5d2d1e9-c405-4de7-8a8f-18180c41f64f","Type":"ContainerStarted","Data":"86f7c23187d34fe862afa5ec7884db362bd8d3e5c6eaadc5de4e64917f295dae"} Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.092477 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-65ffv" event={"ID":"7c745995-ccb1-4229-b176-0e1af03d6c6d","Type":"ContainerStarted","Data":"6ffd4bf4f7ed6d930889cd9c32f62c470266c7bb6ba8db1e874bf0998c742b5a"} Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.092978 4660 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-grt4k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 
10.217.0.30:8080: connect: connection refused" start-of-body= Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.093011 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" podUID="2a6abc12-af78-4433-84ae-8421ade4d80c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.095379 4660 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7c4dl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.095426 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" podUID="8d4807a1-3e06-4ec0-9e43-70d5c9755f61" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.106841 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dhkss" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.130859 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wtvmf" podStartSLOduration=9.130835939 podStartE2EDuration="9.130835939s" podCreationTimestamp="2026-01-29 12:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:30.127829351 +0000 UTC m=+147.350771473" watchObservedRunningTime="2026-01-29 12:08:30.130835939 +0000 UTC m=+147.353778071" Jan 29 
12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.162811 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" podStartSLOduration=127.162785776 podStartE2EDuration="2m7.162785776s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:30.152046841 +0000 UTC m=+147.374988983" watchObservedRunningTime="2026-01-29 12:08:30.162785776 +0000 UTC m=+147.385727918" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.189157 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:30 crc kubenswrapper[4660]: E0129 12:08:30.191054 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.691038075 +0000 UTC m=+147.913980217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.218877 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:30 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:30 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:30 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.218938 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.239379 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" podStartSLOduration=128.239365483 podStartE2EDuration="2m8.239365483s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:30.238140797 +0000 UTC m=+147.461082929" watchObservedRunningTime="2026-01-29 12:08:30.239365483 +0000 UTC m=+147.462307615" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.291539 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:30 crc kubenswrapper[4660]: E0129 12:08:30.291946 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.791930425 +0000 UTC m=+148.014872557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.318766 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-4c2k4" podStartSLOduration=127.318750062 podStartE2EDuration="2m7.318750062s" podCreationTimestamp="2026-01-29 12:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:30.276677698 +0000 UTC m=+147.499619840" watchObservedRunningTime="2026-01-29 12:08:30.318750062 +0000 UTC m=+147.541692194" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.318876 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s6zhh" 
podStartSLOduration=128.318872766 podStartE2EDuration="2m8.318872766s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:30.318852305 +0000 UTC m=+147.541794437" watchObservedRunningTime="2026-01-29 12:08:30.318872766 +0000 UTC m=+147.541814888" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.392669 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:30 crc kubenswrapper[4660]: E0129 12:08:30.392922 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.892896738 +0000 UTC m=+148.115838870 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.393280 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.393307 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.393332 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:30 crc kubenswrapper[4660]: E0129 12:08:30.394319 4660 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.894302839 +0000 UTC m=+148.117244971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.408040 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.412138 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.495207 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:30 crc 
kubenswrapper[4660]: I0129 12:08:30.495409 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.495471 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:08:30 crc kubenswrapper[4660]: E0129 12:08:30.497842 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:30.997823346 +0000 UTC m=+148.220765478 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.512539 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.514246 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.528249 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6jrtx" podStartSLOduration=128.528233398 podStartE2EDuration="2m8.528233398s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:30.524943172 +0000 UTC m=+147.747885314" watchObservedRunningTime="2026-01-29 12:08:30.528233398 +0000 UTC m=+147.751175530" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.596648 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:30 crc kubenswrapper[4660]: E0129 12:08:30.597248 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:31.097234693 +0000 UTC m=+148.320176825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.671786 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" podStartSLOduration=128.671754059 podStartE2EDuration="2m8.671754059s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:30.669913455 +0000 UTC m=+147.892855587" watchObservedRunningTime="2026-01-29 12:08:30.671754059 +0000 UTC m=+147.894696191" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.688065 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.697871 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:30 crc kubenswrapper[4660]: E0129 12:08:30.698426 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:31.198412011 +0000 UTC m=+148.421354143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.700478 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.706644 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.800410 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:30 crc kubenswrapper[4660]: E0129 12:08:30.800811 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:31.300790665 +0000 UTC m=+148.523732797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.900991 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:30 crc kubenswrapper[4660]: E0129 12:08:30.901182 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:31.40115297 +0000 UTC m=+148.624095102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:30 crc kubenswrapper[4660]: I0129 12:08:30.901244 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:30 crc kubenswrapper[4660]: E0129 12:08:30.901566 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:31.401553331 +0000 UTC m=+148.624495523 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.004988 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:31 crc kubenswrapper[4660]: E0129 12:08:31.005852 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:31.505834671 +0000 UTC m=+148.728776803 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.074284 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wz7n4" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.107922 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:31 crc kubenswrapper[4660]: E0129 12:08:31.108508 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:31.608496043 +0000 UTC m=+148.831438175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.145630 4660 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-grt4k container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.145674 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" podUID="2a6abc12-af78-4433-84ae-8421ade4d80c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.188189 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7c4dl" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.211341 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:31 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:31 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:31 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.211608 
4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.230599 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:31 crc kubenswrapper[4660]: E0129 12:08:31.237338 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:31.737310422 +0000 UTC m=+148.960252554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.237396 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:31 crc kubenswrapper[4660]: E0129 12:08:31.248680 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:31.748661835 +0000 UTC m=+148.971603977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.290330 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rvv98"] Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.291301 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.297915 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.342155 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:31 crc kubenswrapper[4660]: E0129 12:08:31.344072 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:31.844045004 +0000 UTC m=+149.066987136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.445938 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4fad5-42bd-4652-9a66-d80ada00bb84-utilities\") pod \"certified-operators-rvv98\" (UID: \"58e4fad5-42bd-4652-9a66-d80ada00bb84\") " pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.446198 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4fad5-42bd-4652-9a66-d80ada00bb84-catalog-content\") pod \"certified-operators-rvv98\" (UID: \"58e4fad5-42bd-4652-9a66-d80ada00bb84\") " pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.446345 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9n24\" (UniqueName: \"kubernetes.io/projected/58e4fad5-42bd-4652-9a66-d80ada00bb84-kube-api-access-t9n24\") pod \"certified-operators-rvv98\" (UID: \"58e4fad5-42bd-4652-9a66-d80ada00bb84\") " pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.446442 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:31 crc kubenswrapper[4660]: E0129 12:08:31.446854 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:31.946832119 +0000 UTC m=+149.169774261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.470045 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvv98"] Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.487606 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9x2dh"] Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.488830 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.551175 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.551226 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.551438 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4fad5-42bd-4652-9a66-d80ada00bb84-catalog-content\") pod \"certified-operators-rvv98\" (UID: \"58e4fad5-42bd-4652-9a66-d80ada00bb84\") " pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.551496 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b35a13-7853-4d23-9cd4-015b2d10d25a-catalog-content\") pod \"community-operators-9x2dh\" (UID: \"d4b35a13-7853-4d23-9cd4-015b2d10d25a\") " pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.551542 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9n24\" (UniqueName: \"kubernetes.io/projected/58e4fad5-42bd-4652-9a66-d80ada00bb84-kube-api-access-t9n24\") pod \"certified-operators-rvv98\" (UID: \"58e4fad5-42bd-4652-9a66-d80ada00bb84\") " pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.551571 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b35a13-7853-4d23-9cd4-015b2d10d25a-utilities\") pod \"community-operators-9x2dh\" (UID: \"d4b35a13-7853-4d23-9cd4-015b2d10d25a\") " pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.551619 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw6tq\" (UniqueName: \"kubernetes.io/projected/d4b35a13-7853-4d23-9cd4-015b2d10d25a-kube-api-access-nw6tq\") pod \"community-operators-9x2dh\" (UID: \"d4b35a13-7853-4d23-9cd4-015b2d10d25a\") " pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.551644 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4fad5-42bd-4652-9a66-d80ada00bb84-utilities\") pod \"certified-operators-rvv98\" (UID: \"58e4fad5-42bd-4652-9a66-d80ada00bb84\") " pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:08:31 crc kubenswrapper[4660]: E0129 12:08:31.555829 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:32.055807637 +0000 UTC m=+149.278749809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.556185 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4fad5-42bd-4652-9a66-d80ada00bb84-catalog-content\") pod \"certified-operators-rvv98\" (UID: \"58e4fad5-42bd-4652-9a66-d80ada00bb84\") " pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.573374 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4fad5-42bd-4652-9a66-d80ada00bb84-utilities\") pod \"certified-operators-rvv98\" (UID: \"58e4fad5-42bd-4652-9a66-d80ada00bb84\") " pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.603306 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9x2dh"] Jan 29 12:08:31 crc kubenswrapper[4660]: W0129 12:08:31.641092 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-a7ae45d66fc94d60dd73cfd6f776584526dc4e7a75729e7a7f21789da32ebcc1 WatchSource:0}: Error finding container a7ae45d66fc94d60dd73cfd6f776584526dc4e7a75729e7a7f21789da32ebcc1: Status 404 returned error can't find the container with id a7ae45d66fc94d60dd73cfd6f776584526dc4e7a75729e7a7f21789da32ebcc1 Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.654380 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b35a13-7853-4d23-9cd4-015b2d10d25a-catalog-content\") pod \"community-operators-9x2dh\" (UID: \"d4b35a13-7853-4d23-9cd4-015b2d10d25a\") " pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.654439 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b35a13-7853-4d23-9cd4-015b2d10d25a-utilities\") pod \"community-operators-9x2dh\" (UID: \"d4b35a13-7853-4d23-9cd4-015b2d10d25a\") " pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.654466 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.654495 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw6tq\" (UniqueName: \"kubernetes.io/projected/d4b35a13-7853-4d23-9cd4-015b2d10d25a-kube-api-access-nw6tq\") pod \"community-operators-9x2dh\" (UID: \"d4b35a13-7853-4d23-9cd4-015b2d10d25a\") " pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.655199 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b35a13-7853-4d23-9cd4-015b2d10d25a-catalog-content\") pod \"community-operators-9x2dh\" (UID: \"d4b35a13-7853-4d23-9cd4-015b2d10d25a\") " pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:08:31 crc 
kubenswrapper[4660]: I0129 12:08:31.655423 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b35a13-7853-4d23-9cd4-015b2d10d25a-utilities\") pod \"community-operators-9x2dh\" (UID: \"d4b35a13-7853-4d23-9cd4-015b2d10d25a\") " pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:08:31 crc kubenswrapper[4660]: E0129 12:08:31.655725 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:32.155674717 +0000 UTC m=+149.378616849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.668981 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9n24\" (UniqueName: \"kubernetes.io/projected/58e4fad5-42bd-4652-9a66-d80ada00bb84-kube-api-access-t9n24\") pod \"certified-operators-rvv98\" (UID: \"58e4fad5-42bd-4652-9a66-d80ada00bb84\") " pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.741155 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw6tq\" (UniqueName: \"kubernetes.io/projected/d4b35a13-7853-4d23-9cd4-015b2d10d25a-kube-api-access-nw6tq\") pod \"community-operators-9x2dh\" (UID: \"d4b35a13-7853-4d23-9cd4-015b2d10d25a\") " pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:08:31 crc 
kubenswrapper[4660]: I0129 12:08:31.745994 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.756454 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:31 crc kubenswrapper[4660]: E0129 12:08:31.756736 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:32.256721812 +0000 UTC m=+149.479663944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.823715 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.847943 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-49j6q"] Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.848989 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-49j6q" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.857593 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:31 crc kubenswrapper[4660]: E0129 12:08:31.857924 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:32.35791158 +0000 UTC m=+149.580853712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.948058 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-49j6q"] Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.963249 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.963461 4660 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70ebf77-b129-4e52-85f6-f969b97e855e-catalog-content\") pod \"community-operators-49j6q\" (UID: \"d70ebf77-b129-4e52-85f6-f969b97e855e\") " pod="openshift-marketplace/community-operators-49j6q" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.963482 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfml4\" (UniqueName: \"kubernetes.io/projected/d70ebf77-b129-4e52-85f6-f969b97e855e-kube-api-access-vfml4\") pod \"community-operators-49j6q\" (UID: \"d70ebf77-b129-4e52-85f6-f969b97e855e\") " pod="openshift-marketplace/community-operators-49j6q" Jan 29 12:08:31 crc kubenswrapper[4660]: I0129 12:08:31.963507 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70ebf77-b129-4e52-85f6-f969b97e855e-utilities\") pod \"community-operators-49j6q\" (UID: \"d70ebf77-b129-4e52-85f6-f969b97e855e\") " pod="openshift-marketplace/community-operators-49j6q" Jan 29 12:08:31 crc kubenswrapper[4660]: E0129 12:08:31.963615 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:32.463600391 +0000 UTC m=+149.686542523 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.065893 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.066237 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70ebf77-b129-4e52-85f6-f969b97e855e-catalog-content\") pod \"community-operators-49j6q\" (UID: \"d70ebf77-b129-4e52-85f6-f969b97e855e\") " pod="openshift-marketplace/community-operators-49j6q" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.066265 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfml4\" (UniqueName: \"kubernetes.io/projected/d70ebf77-b129-4e52-85f6-f969b97e855e-kube-api-access-vfml4\") pod \"community-operators-49j6q\" (UID: \"d70ebf77-b129-4e52-85f6-f969b97e855e\") " pod="openshift-marketplace/community-operators-49j6q" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.066299 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70ebf77-b129-4e52-85f6-f969b97e855e-utilities\") pod \"community-operators-49j6q\" (UID: 
\"d70ebf77-b129-4e52-85f6-f969b97e855e\") " pod="openshift-marketplace/community-operators-49j6q" Jan 29 12:08:32 crc kubenswrapper[4660]: E0129 12:08:32.066505 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:32.56648499 +0000 UTC m=+149.789427122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.067188 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70ebf77-b129-4e52-85f6-f969b97e855e-catalog-content\") pod \"community-operators-49j6q\" (UID: \"d70ebf77-b129-4e52-85f6-f969b97e855e\") " pod="openshift-marketplace/community-operators-49j6q" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.068147 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70ebf77-b129-4e52-85f6-f969b97e855e-utilities\") pod \"community-operators-49j6q\" (UID: \"d70ebf77-b129-4e52-85f6-f969b97e855e\") " pod="openshift-marketplace/community-operators-49j6q" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.149838 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bchtm"] Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.151324 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bchtm" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.159014 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" event={"ID":"a5d2d1e9-c405-4de7-8a8f-18180c41f64f","Type":"ContainerStarted","Data":"319c21f04b59457580fc88e0cd7821dea72eb9ee52a4459028bdb2e3d5eeada7"} Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.160920 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a7ae45d66fc94d60dd73cfd6f776584526dc4e7a75729e7a7f21789da32ebcc1"} Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.168229 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:32 crc kubenswrapper[4660]: E0129 12:08:32.168912 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:32.668893244 +0000 UTC m=+149.891835386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.203664 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bchtm"] Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.206182 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:32 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:32 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:32 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.206501 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.207176 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfml4\" (UniqueName: \"kubernetes.io/projected/d70ebf77-b129-4e52-85f6-f969b97e855e-kube-api-access-vfml4\") pod \"community-operators-49j6q\" (UID: \"d70ebf77-b129-4e52-85f6-f969b97e855e\") " pod="openshift-marketplace/community-operators-49j6q" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.270484 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.270521 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv294\" (UniqueName: \"kubernetes.io/projected/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-kube-api-access-sv294\") pod \"certified-operators-bchtm\" (UID: \"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c\") " pod="openshift-marketplace/certified-operators-bchtm" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.270551 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-utilities\") pod \"certified-operators-bchtm\" (UID: \"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c\") " pod="openshift-marketplace/certified-operators-bchtm" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.270674 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-catalog-content\") pod \"certified-operators-bchtm\" (UID: \"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c\") " pod="openshift-marketplace/certified-operators-bchtm" Jan 29 12:08:32 crc kubenswrapper[4660]: E0129 12:08:32.271774 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:32.771760542 +0000 UTC m=+149.994702664 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:32 crc kubenswrapper[4660]: W0129 12:08:32.307828 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-68958bdc6db7e93e376e55003bf18ce6dfedaeba20d1fd04f38cde0dcc36bc63 WatchSource:0}: Error finding container 68958bdc6db7e93e376e55003bf18ce6dfedaeba20d1fd04f38cde0dcc36bc63: Status 404 returned error can't find the container with id 68958bdc6db7e93e376e55003bf18ce6dfedaeba20d1fd04f38cde0dcc36bc63 Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.372952 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:32 crc kubenswrapper[4660]: E0129 12:08:32.373136 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:32.873105766 +0000 UTC m=+150.096047898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.373472 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-catalog-content\") pod \"certified-operators-bchtm\" (UID: \"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c\") " pod="openshift-marketplace/certified-operators-bchtm" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.373538 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.373563 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv294\" (UniqueName: \"kubernetes.io/projected/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-kube-api-access-sv294\") pod \"certified-operators-bchtm\" (UID: \"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c\") " pod="openshift-marketplace/certified-operators-bchtm" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.373587 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-utilities\") pod \"certified-operators-bchtm\" (UID: 
\"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c\") " pod="openshift-marketplace/certified-operators-bchtm" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.374118 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-utilities\") pod \"certified-operators-bchtm\" (UID: \"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c\") " pod="openshift-marketplace/certified-operators-bchtm" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.374398 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-catalog-content\") pod \"certified-operators-bchtm\" (UID: \"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c\") " pod="openshift-marketplace/certified-operators-bchtm" Jan 29 12:08:32 crc kubenswrapper[4660]: E0129 12:08:32.374714 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:32.874679702 +0000 UTC m=+150.097621834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.425213 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv294\" (UniqueName: \"kubernetes.io/projected/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-kube-api-access-sv294\") pod \"certified-operators-bchtm\" (UID: \"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c\") " pod="openshift-marketplace/certified-operators-bchtm" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.475172 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:32 crc kubenswrapper[4660]: E0129 12:08:32.475791 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:32.975759618 +0000 UTC m=+150.198701740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.477757 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bchtm" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.493910 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-49j6q" Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.581058 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:32 crc kubenswrapper[4660]: E0129 12:08:32.581401 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:33.081389397 +0000 UTC m=+150.304331529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.682031 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:32 crc kubenswrapper[4660]: E0129 12:08:32.682542 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:33.182526804 +0000 UTC m=+150.405468936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.783167 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:32 crc kubenswrapper[4660]: E0129 12:08:32.783474 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:33.283463345 +0000 UTC m=+150.506405477 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.889133 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:32 crc kubenswrapper[4660]: E0129 12:08:32.890001 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:33.389984481 +0000 UTC m=+150.612926613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.949354 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvv98"] Jan 29 12:08:32 crc kubenswrapper[4660]: I0129 12:08:32.991218 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:32 crc kubenswrapper[4660]: E0129 12:08:32.991561 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:33.49154424 +0000 UTC m=+150.714486372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.010373 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qwwx4"] Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.036516 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.048197 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.050479 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwwx4"] Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.093990 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.094147 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-catalog-content\") pod \"redhat-marketplace-qwwx4\" (UID: \"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea\") " pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 
12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.094186 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4b4m\" (UniqueName: \"kubernetes.io/projected/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-kube-api-access-f4b4m\") pod \"redhat-marketplace-qwwx4\" (UID: \"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea\") " pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.094208 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-utilities\") pod \"redhat-marketplace-qwwx4\" (UID: \"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea\") " pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:08:33 crc kubenswrapper[4660]: E0129 12:08:33.094356 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:33.594341296 +0000 UTC m=+150.817283428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.167908 4660 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9z9h5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.167968 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" podUID="ba274694-159c-4f63-9aff-54ba10d6f5ed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.196292 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"fe14b27abac4631d1e37c0657bad7e93c52a8a17818cecdf72464eaf3f9dd24f"} Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.196342 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9dbf3deede76c886628afcfc0aa06fe10b2ea05cc75e3c7e505e5f89abfc7c4e"} 
Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.197049 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.198051 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-catalog-content\") pod \"redhat-marketplace-qwwx4\" (UID: \"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea\") " pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.198373 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4b4m\" (UniqueName: \"kubernetes.io/projected/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-kube-api-access-f4b4m\") pod \"redhat-marketplace-qwwx4\" (UID: \"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea\") " pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.198496 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-utilities\") pod \"redhat-marketplace-qwwx4\" (UID: \"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea\") " pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.198647 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.198897 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-catalog-content\") pod \"redhat-marketplace-qwwx4\" (UID: \"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea\") " pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:08:33 crc kubenswrapper[4660]: E0129 12:08:33.199233 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:33.699213662 +0000 UTC m=+150.922155794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.199512 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-utilities\") pod \"redhat-marketplace-qwwx4\" (UID: \"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea\") " pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.212811 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:33 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:33 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:33 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 
12:08:33.212869 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.216976 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.228375 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" event={"ID":"a5d2d1e9-c405-4de7-8a8f-18180c41f64f","Type":"ContainerStarted","Data":"1998cb38bd14342045945f22287fb88f18f3b871c6bd394913d8d4df9c265418"} Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.228426 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvv98" event={"ID":"58e4fad5-42bd-4652-9a66-d80ada00bb84","Type":"ContainerStarted","Data":"aaa6914d386046aa926b2d1b9eaba80696935f953094d820d83681558dc4e306"} Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.228445 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fbc6c8399735b6891ccca96c36af50741bdea152da7e8e25d970752a39ba27dc"} Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.228456 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"68958bdc6db7e93e376e55003bf18ce6dfedaeba20d1fd04f38cde0dcc36bc63"} Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.228734 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 12:08:33 crc 
kubenswrapper[4660]: I0129 12:08:33.228857 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.236285 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.236521 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.237840 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"28d09b75576e2bd14c6c25a18ed76ca30dc3e3756223d54175fd52cf961495aa"} Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.270261 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4b4m\" (UniqueName: \"kubernetes.io/projected/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-kube-api-access-f4b4m\") pod \"redhat-marketplace-qwwx4\" (UID: \"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea\") " pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.305797 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.306116 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58690bdc-c80f-4740-9604-9af4bf303576-kubelet-dir\") pod 
\"revision-pruner-9-crc\" (UID: \"58690bdc-c80f-4740-9604-9af4bf303576\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.306210 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58690bdc-c80f-4740-9604-9af4bf303576-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"58690bdc-c80f-4740-9604-9af4bf303576\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:08:33 crc kubenswrapper[4660]: E0129 12:08:33.306819 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:33.806803359 +0000 UTC m=+151.029745491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.355835 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9x2dh"] Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.378982 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x95ls"] Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.380199 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x95ls" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.388713 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.416586 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58690bdc-c80f-4740-9604-9af4bf303576-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"58690bdc-c80f-4740-9604-9af4bf303576\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.416877 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58690bdc-c80f-4740-9604-9af4bf303576-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"58690bdc-c80f-4740-9604-9af4bf303576\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.417000 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:33 crc kubenswrapper[4660]: E0129 12:08:33.417387 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:33.917370323 +0000 UTC m=+151.140312455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.417723 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58690bdc-c80f-4740-9604-9af4bf303576-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"58690bdc-c80f-4740-9604-9af4bf303576\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.438346 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x95ls"] Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.496631 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58690bdc-c80f-4740-9604-9af4bf303576-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"58690bdc-c80f-4740-9604-9af4bf303576\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.519275 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.519771 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2f46b165-b1bd-42ea-8704-adafba36b152-catalog-content\") pod \"redhat-marketplace-x95ls\" (UID: \"2f46b165-b1bd-42ea-8704-adafba36b152\") " pod="openshift-marketplace/redhat-marketplace-x95ls" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.519808 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f46b165-b1bd-42ea-8704-adafba36b152-utilities\") pod \"redhat-marketplace-x95ls\" (UID: \"2f46b165-b1bd-42ea-8704-adafba36b152\") " pod="openshift-marketplace/redhat-marketplace-x95ls" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.519909 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th8lm\" (UniqueName: \"kubernetes.io/projected/2f46b165-b1bd-42ea-8704-adafba36b152-kube-api-access-th8lm\") pod \"redhat-marketplace-x95ls\" (UID: \"2f46b165-b1bd-42ea-8704-adafba36b152\") " pod="openshift-marketplace/redhat-marketplace-x95ls" Jan 29 12:08:33 crc kubenswrapper[4660]: E0129 12:08:33.522135 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:34.022105346 +0000 UTC m=+151.245047478 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.620742 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.631166 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th8lm\" (UniqueName: \"kubernetes.io/projected/2f46b165-b1bd-42ea-8704-adafba36b152-kube-api-access-th8lm\") pod \"redhat-marketplace-x95ls\" (UID: \"2f46b165-b1bd-42ea-8704-adafba36b152\") " pod="openshift-marketplace/redhat-marketplace-x95ls" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.631412 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f46b165-b1bd-42ea-8704-adafba36b152-catalog-content\") pod \"redhat-marketplace-x95ls\" (UID: \"2f46b165-b1bd-42ea-8704-adafba36b152\") " pod="openshift-marketplace/redhat-marketplace-x95ls" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.631527 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f46b165-b1bd-42ea-8704-adafba36b152-utilities\") pod \"redhat-marketplace-x95ls\" (UID: \"2f46b165-b1bd-42ea-8704-adafba36b152\") " pod="openshift-marketplace/redhat-marketplace-x95ls" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.631751 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:33 crc kubenswrapper[4660]: E0129 12:08:33.632150 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:34.132134984 +0000 UTC m=+151.355077116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.632797 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f46b165-b1bd-42ea-8704-adafba36b152-catalog-content\") pod \"redhat-marketplace-x95ls\" (UID: \"2f46b165-b1bd-42ea-8704-adafba36b152\") " pod="openshift-marketplace/redhat-marketplace-x95ls" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.632810 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f46b165-b1bd-42ea-8704-adafba36b152-utilities\") pod \"redhat-marketplace-x95ls\" (UID: \"2f46b165-b1bd-42ea-8704-adafba36b152\") " pod="openshift-marketplace/redhat-marketplace-x95ls" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.736212 4660 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:33 crc kubenswrapper[4660]: E0129 12:08:33.736669 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:34.23664834 +0000 UTC m=+151.459590472 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.740177 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th8lm\" (UniqueName: \"kubernetes.io/projected/2f46b165-b1bd-42ea-8704-adafba36b152-kube-api-access-th8lm\") pod \"redhat-marketplace-x95ls\" (UID: \"2f46b165-b1bd-42ea-8704-adafba36b152\") " pod="openshift-marketplace/redhat-marketplace-x95ls" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.745586 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x95ls" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.842556 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:33 crc kubenswrapper[4660]: E0129 12:08:33.842917 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:34.342899708 +0000 UTC m=+151.565841840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.861897 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.862847 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.883920 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.953023 4660 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:33 crc kubenswrapper[4660]: E0129 12:08:33.954262 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:34.454241824 +0000 UTC m=+151.677183956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:33 crc kubenswrapper[4660]: I0129 12:08:33.982703 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-49j6q"] Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.011896 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-grgf5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.011944 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 
10.217.0.17:8080: connect: connection refused" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.012326 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-grgf5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.012349 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.034164 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-w8qs9" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.054512 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:34 crc kubenswrapper[4660]: E0129 12:08:34.055086 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:34.555069933 +0000 UTC m=+151.778012065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.155636 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:34 crc kubenswrapper[4660]: E0129 12:08:34.157111 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:34.657095946 +0000 UTC m=+151.880038078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.180266 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.180298 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.213044 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.218239 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:34 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:34 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:34 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.218298 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.258599 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:34 crc kubenswrapper[4660]: E0129 12:08:34.260182 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:34.76016831 +0000 UTC m=+151.983110442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.301605 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" event={"ID":"a5d2d1e9-c405-4de7-8a8f-18180c41f64f","Type":"ContainerStarted","Data":"c182d47020b5ddade2c9a5731ae4c2b1df091b2331455d7cf9b0a7448345f672"} Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.312118 4660 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.313222 4660 generic.go:334] "Generic (PLEG): container finished" podID="d4b35a13-7853-4d23-9cd4-015b2d10d25a" containerID="2fc059651739b5fe4bbc5a76110f43fbcdc8e4d41ed2acf149266abf4b1925e2" exitCode=0 Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 
12:08:34.313276 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2dh" event={"ID":"d4b35a13-7853-4d23-9cd4-015b2d10d25a","Type":"ContainerDied","Data":"2fc059651739b5fe4bbc5a76110f43fbcdc8e4d41ed2acf149266abf4b1925e2"} Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.313298 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2dh" event={"ID":"d4b35a13-7853-4d23-9cd4-015b2d10d25a","Type":"ContainerStarted","Data":"d97ec8765ff60273a93a3340222daf375388379eeb99d27ae89eb418adac4b51"} Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.314293 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9z9h5" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.315338 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.319827 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49j6q" event={"ID":"d70ebf77-b129-4e52-85f6-f969b97e855e","Type":"ContainerStarted","Data":"1ab6cae4eb18ffa5466ddfb084ee959cce6c81f712b587690ebfafa4ddb67edd"} Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.330089 4660 generic.go:334] "Generic (PLEG): container finished" podID="58e4fad5-42bd-4652-9a66-d80ada00bb84" containerID="bda38277f6c53a0ee3348736b484b959407c3aefc108b07b416abe680ab1dfba" exitCode=0 Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.331192 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvv98" event={"ID":"58e4fad5-42bd-4652-9a66-d80ada00bb84","Type":"ContainerDied","Data":"bda38277f6c53a0ee3348736b484b959407c3aefc108b07b416abe680ab1dfba"} Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.350835 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bl69x" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.360383 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:34 crc kubenswrapper[4660]: E0129 12:08:34.360507 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:34.860486733 +0000 UTC m=+152.083428865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:34 crc kubenswrapper[4660]: E0129 12:08:34.361217 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:34.861208375 +0000 UTC m=+152.084150507 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.360940 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.425853 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8j7l7" podStartSLOduration=13.425833481 podStartE2EDuration="13.425833481s" podCreationTimestamp="2026-01-29 12:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:34.42138728 +0000 UTC m=+151.644329432" watchObservedRunningTime="2026-01-29 12:08:34.425833481 +0000 UTC m=+151.648775613" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.469024 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:34 crc kubenswrapper[4660]: E0129 12:08:34.470565 4660 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:34.970545272 +0000 UTC m=+152.193487404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.573262 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:34 crc kubenswrapper[4660]: E0129 12:08:34.573537 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:08:35.073524294 +0000 UTC m=+152.296466426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sn58d" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.579961 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.598330 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bchtm"] Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.622199 4660 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T12:08:34.312759443Z","Handler":null,"Name":""} Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.674566 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:34 crc kubenswrapper[4660]: E0129 12:08:34.675579 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:35.175558647 +0000 UTC m=+152.398500779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.713811 4660 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.713850 4660 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.776105 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.792191 4660 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.792249 4660 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.809294 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ccbzg"] Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.809770 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" podUID="101df3f7-db56-4a33-a0ec-d513f6785dde" containerName="controller-manager" containerID="cri-o://bd69a11a32bff522470509bca7fe9109eba6f652e07fb349bd3dc8585ba42996" gracePeriod=30 Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.866549 4660 patch_prober.go:28] interesting pod/apiserver-76f77b778f-9zrm5 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 29 12:08:34 crc kubenswrapper[4660]: [+]log ok Jan 29 12:08:34 crc kubenswrapper[4660]: [+]etcd ok Jan 29 12:08:34 crc kubenswrapper[4660]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 29 12:08:34 crc kubenswrapper[4660]: [+]poststarthook/generic-apiserver-start-informers ok Jan 29 12:08:34 crc kubenswrapper[4660]: [+]poststarthook/max-in-flight-filter ok Jan 29 12:08:34 crc kubenswrapper[4660]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 29 12:08:34 crc 
kubenswrapper[4660]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 29 12:08:34 crc kubenswrapper[4660]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 29 12:08:34 crc kubenswrapper[4660]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 29 12:08:34 crc kubenswrapper[4660]: [+]poststarthook/project.openshift.io-projectcache ok Jan 29 12:08:34 crc kubenswrapper[4660]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 29 12:08:34 crc kubenswrapper[4660]: [+]poststarthook/openshift.io-startinformers ok Jan 29 12:08:34 crc kubenswrapper[4660]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 29 12:08:34 crc kubenswrapper[4660]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 29 12:08:34 crc kubenswrapper[4660]: livez check failed Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.866590 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" podUID="6c466078-87ee-40ea-83ee-11aa309b065f" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.950002 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwwx4"] Jan 29 12:08:34 crc kubenswrapper[4660]: I0129 12:08:34.988105 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nlxjk"] Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.006498 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nlxjk"] Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.006597 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nlxjk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.008575 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.071827 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.072152 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.073426 4660 patch_prober.go:28] interesting pod/console-f9d7485db-tvjqj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.073503 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tvjqj" podUID="b177214f-7d4c-4f4f-8741-3a2695d1c495" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.084536 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-catalog-content\") pod \"redhat-operators-nlxjk\" (UID: \"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae\") " pod="openshift-marketplace/redhat-operators-nlxjk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.084574 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xznfq\" (UniqueName: 
\"kubernetes.io/projected/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-kube-api-access-xznfq\") pod \"redhat-operators-nlxjk\" (UID: \"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae\") " pod="openshift-marketplace/redhat-operators-nlxjk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.084590 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-utilities\") pod \"redhat-operators-nlxjk\" (UID: \"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae\") " pod="openshift-marketplace/redhat-operators-nlxjk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.196001 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-catalog-content\") pod \"redhat-operators-nlxjk\" (UID: \"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae\") " pod="openshift-marketplace/redhat-operators-nlxjk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.196072 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xznfq\" (UniqueName: \"kubernetes.io/projected/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-kube-api-access-xznfq\") pod \"redhat-operators-nlxjk\" (UID: \"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae\") " pod="openshift-marketplace/redhat-operators-nlxjk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.196102 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-utilities\") pod \"redhat-operators-nlxjk\" (UID: \"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae\") " pod="openshift-marketplace/redhat-operators-nlxjk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.197592 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-catalog-content\") pod \"redhat-operators-nlxjk\" (UID: \"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae\") " pod="openshift-marketplace/redhat-operators-nlxjk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.197826 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-utilities\") pod \"redhat-operators-nlxjk\" (UID: \"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae\") " pod="openshift-marketplace/redhat-operators-nlxjk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.213314 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:35 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:35 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:35 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.213373 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.234603 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xznfq\" (UniqueName: \"kubernetes.io/projected/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-kube-api-access-xznfq\") pod \"redhat-operators-nlxjk\" (UID: \"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae\") " pod="openshift-marketplace/redhat-operators-nlxjk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.299736 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sn58d\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.337777 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x95ls"] Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.342931 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.383080 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7lgwk"] Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.385996 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlxjk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.391754 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7lgwk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.398000 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.412214 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7lgwk"] Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.442242 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.488018 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.495837 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwwx4" event={"ID":"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea","Type":"ContainerStarted","Data":"13c8d1efa6d7efa2fd977593b5bfdfb4c020c11c461cc8cdb83baf2fe6b73c4e"} Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.496194 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwwx4" event={"ID":"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea","Type":"ContainerStarted","Data":"99e23e23cb41491286b225adb5c1c1e5b1522499fb1ae58cf9e9f9dfff5866b2"} Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.499410 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxpfm\" (UniqueName: \"kubernetes.io/projected/42a55618-51a2-4df8-b139-ed326fd6371f-kube-api-access-dxpfm\") pod \"redhat-operators-7lgwk\" (UID: \"42a55618-51a2-4df8-b139-ed326fd6371f\") " pod="openshift-marketplace/redhat-operators-7lgwk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.499602 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a55618-51a2-4df8-b139-ed326fd6371f-utilities\") pod \"redhat-operators-7lgwk\" (UID: \"42a55618-51a2-4df8-b139-ed326fd6371f\") " pod="openshift-marketplace/redhat-operators-7lgwk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.499635 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/42a55618-51a2-4df8-b139-ed326fd6371f-catalog-content\") pod \"redhat-operators-7lgwk\" (UID: \"42a55618-51a2-4df8-b139-ed326fd6371f\") " pod="openshift-marketplace/redhat-operators-7lgwk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.527735 4660 generic.go:334] "Generic (PLEG): container finished" podID="fd77e369-3bfb-4bd7-aca5-441b93b3a2c8" containerID="ef971a7899770bea42b59d8d501ef40b0c07efc5fa0a25dfdf7a4086b9fde529" exitCode=0 Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.528024 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" event={"ID":"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8","Type":"ContainerDied","Data":"ef971a7899770bea42b59d8d501ef40b0c07efc5fa0a25dfdf7a4086b9fde529"} Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.552078 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.588684 4660 generic.go:334] "Generic (PLEG): container finished" podID="6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" containerID="2c3f3baec840fcbbd39b9d6d7a5f3475854d9ea03653076dd249f6fb271c56ea" exitCode=0 Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.589257 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bchtm" event={"ID":"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c","Type":"ContainerDied","Data":"2c3f3baec840fcbbd39b9d6d7a5f3475854d9ea03653076dd249f6fb271c56ea"} Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.589320 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bchtm" event={"ID":"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c","Type":"ContainerStarted","Data":"1afe7ff53a152e5aef206b0447f1a331c07609b802a7160d52cb252a018d2744"} Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.601539 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a55618-51a2-4df8-b139-ed326fd6371f-utilities\") pod \"redhat-operators-7lgwk\" (UID: \"42a55618-51a2-4df8-b139-ed326fd6371f\") " pod="openshift-marketplace/redhat-operators-7lgwk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.601582 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a55618-51a2-4df8-b139-ed326fd6371f-catalog-content\") pod \"redhat-operators-7lgwk\" (UID: \"42a55618-51a2-4df8-b139-ed326fd6371f\") " pod="openshift-marketplace/redhat-operators-7lgwk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.601655 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxpfm\" (UniqueName: \"kubernetes.io/projected/42a55618-51a2-4df8-b139-ed326fd6371f-kube-api-access-dxpfm\") pod \"redhat-operators-7lgwk\" (UID: \"42a55618-51a2-4df8-b139-ed326fd6371f\") " pod="openshift-marketplace/redhat-operators-7lgwk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.603061 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a55618-51a2-4df8-b139-ed326fd6371f-catalog-content\") pod \"redhat-operators-7lgwk\" (UID: \"42a55618-51a2-4df8-b139-ed326fd6371f\") " pod="openshift-marketplace/redhat-operators-7lgwk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.603413 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a55618-51a2-4df8-b139-ed326fd6371f-utilities\") pod \"redhat-operators-7lgwk\" (UID: \"42a55618-51a2-4df8-b139-ed326fd6371f\") " pod="openshift-marketplace/redhat-operators-7lgwk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.624738 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dxpfm\" (UniqueName: \"kubernetes.io/projected/42a55618-51a2-4df8-b139-ed326fd6371f-kube-api-access-dxpfm\") pod \"redhat-operators-7lgwk\" (UID: \"42a55618-51a2-4df8-b139-ed326fd6371f\") " pod="openshift-marketplace/redhat-operators-7lgwk" Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.630104 4660 generic.go:334] "Generic (PLEG): container finished" podID="d70ebf77-b129-4e52-85f6-f969b97e855e" containerID="a8da9774864d0e9ea7a45b0298160b447d2f5f25437ced569edd919ede8e2a72" exitCode=0 Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.630246 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49j6q" event={"ID":"d70ebf77-b129-4e52-85f6-f969b97e855e","Type":"ContainerDied","Data":"a8da9774864d0e9ea7a45b0298160b447d2f5f25437ced569edd919ede8e2a72"} Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.639873 4660 generic.go:334] "Generic (PLEG): container finished" podID="101df3f7-db56-4a33-a0ec-d513f6785dde" containerID="bd69a11a32bff522470509bca7fe9109eba6f652e07fb349bd3dc8585ba42996" exitCode=0 Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.642401 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" event={"ID":"101df3f7-db56-4a33-a0ec-d513f6785dde","Type":"ContainerDied","Data":"bd69a11a32bff522470509bca7fe9109eba6f652e07fb349bd3dc8585ba42996"} Jan 29 12:08:35 crc kubenswrapper[4660]: I0129 12:08:35.766648 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7lgwk" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.008716 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.114468 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-proxy-ca-bundles\") pod \"101df3f7-db56-4a33-a0ec-d513f6785dde\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.114915 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/101df3f7-db56-4a33-a0ec-d513f6785dde-serving-cert\") pod \"101df3f7-db56-4a33-a0ec-d513f6785dde\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.114950 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-config\") pod \"101df3f7-db56-4a33-a0ec-d513f6785dde\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.115006 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k66dr\" (UniqueName: \"kubernetes.io/projected/101df3f7-db56-4a33-a0ec-d513f6785dde-kube-api-access-k66dr\") pod \"101df3f7-db56-4a33-a0ec-d513f6785dde\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.115097 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-client-ca\") pod \"101df3f7-db56-4a33-a0ec-d513f6785dde\" (UID: \"101df3f7-db56-4a33-a0ec-d513f6785dde\") " Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.115508 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "101df3f7-db56-4a33-a0ec-d513f6785dde" (UID: "101df3f7-db56-4a33-a0ec-d513f6785dde"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.115871 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-client-ca" (OuterVolumeSpecName: "client-ca") pod "101df3f7-db56-4a33-a0ec-d513f6785dde" (UID: "101df3f7-db56-4a33-a0ec-d513f6785dde"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.116675 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-config" (OuterVolumeSpecName: "config") pod "101df3f7-db56-4a33-a0ec-d513f6785dde" (UID: "101df3f7-db56-4a33-a0ec-d513f6785dde"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.127325 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/101df3f7-db56-4a33-a0ec-d513f6785dde-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "101df3f7-db56-4a33-a0ec-d513f6785dde" (UID: "101df3f7-db56-4a33-a0ec-d513f6785dde"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.129614 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/101df3f7-db56-4a33-a0ec-d513f6785dde-kube-api-access-k66dr" (OuterVolumeSpecName: "kube-api-access-k66dr") pod "101df3f7-db56-4a33-a0ec-d513f6785dde" (UID: "101df3f7-db56-4a33-a0ec-d513f6785dde"). InnerVolumeSpecName "kube-api-access-k66dr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.216100 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k66dr\" (UniqueName: \"kubernetes.io/projected/101df3f7-db56-4a33-a0ec-d513f6785dde-kube-api-access-k66dr\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.216133 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.216091 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:36 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:36 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:36 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.216223 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.216145 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.216615 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/101df3f7-db56-4a33-a0ec-d513f6785dde-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:36 crc kubenswrapper[4660]: 
I0129 12:08:36.216627 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/101df3f7-db56-4a33-a0ec-d513f6785dde-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.233963 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sn58d"] Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.502376 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dm45b"] Jan 29 12:08:36 crc kubenswrapper[4660]: E0129 12:08:36.502813 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="101df3f7-db56-4a33-a0ec-d513f6785dde" containerName="controller-manager" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.502826 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="101df3f7-db56-4a33-a0ec-d513f6785dde" containerName="controller-manager" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.502974 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="101df3f7-db56-4a33-a0ec-d513f6785dde" containerName="controller-manager" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.504190 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nlxjk"] Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.504348 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.528072 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dm45b"] Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.571748 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7lgwk"] Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.627366 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dm45b\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.627417 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcf74\" (UniqueName: \"kubernetes.io/projected/5d608b56-31aa-4893-88d5-7f19283a6706-kube-api-access-jcf74\") pod \"controller-manager-879f6c89f-dm45b\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.627490 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-config\") pod \"controller-manager-879f6c89f-dm45b\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.627517 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-client-ca\") pod \"controller-manager-879f6c89f-dm45b\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.627544 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d608b56-31aa-4893-88d5-7f19283a6706-serving-cert\") pod \"controller-manager-879f6c89f-dm45b\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.654449 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxjk" event={"ID":"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae","Type":"ContainerStarted","Data":"13a723ac3c0592037448ad19161c790af44f27cb30e5e472e1eb0ca1dae26db5"} Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.678507 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" event={"ID":"101df3f7-db56-4a33-a0ec-d513f6785dde","Type":"ContainerDied","Data":"6b5100f5876b622c9365484dc73b8ad90536004ec5d7206a666193ae7aaf4531"} Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.678572 4660 scope.go:117] "RemoveContainer" containerID="bd69a11a32bff522470509bca7fe9109eba6f652e07fb349bd3dc8585ba42996" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.678720 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ccbzg" Jan 29 12:08:36 crc kubenswrapper[4660]: W0129 12:08:36.698031 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42a55618_51a2_4df8_b139_ed326fd6371f.slice/crio-2af1b7031b9ebdf1167d016d08ca41dd9bb2355e2017d1907b0030ea1dac3e53 WatchSource:0}: Error finding container 2af1b7031b9ebdf1167d016d08ca41dd9bb2355e2017d1907b0030ea1dac3e53: Status 404 returned error can't find the container with id 2af1b7031b9ebdf1167d016d08ca41dd9bb2355e2017d1907b0030ea1dac3e53 Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.703133 4660 generic.go:334] "Generic (PLEG): container finished" podID="2f46b165-b1bd-42ea-8704-adafba36b152" containerID="aefcc489a109904c3d7b7ac7960be6d71605a55cf9c50acdd72d369be779d73d" exitCode=0 Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.703235 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x95ls" event={"ID":"2f46b165-b1bd-42ea-8704-adafba36b152","Type":"ContainerDied","Data":"aefcc489a109904c3d7b7ac7960be6d71605a55cf9c50acdd72d369be779d73d"} Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.703259 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x95ls" event={"ID":"2f46b165-b1bd-42ea-8704-adafba36b152","Type":"ContainerStarted","Data":"f59b5db63d3ad6fb6714bf9266c3b1752fecbf78430781e4a9e1946c65c47858"} Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.717149 4660 generic.go:334] "Generic (PLEG): container finished" podID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" containerID="13c8d1efa6d7efa2fd977593b5bfdfb4c020c11c461cc8cdb83baf2fe6b73c4e" exitCode=0 Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.717232 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwwx4" 
event={"ID":"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea","Type":"ContainerDied","Data":"13c8d1efa6d7efa2fd977593b5bfdfb4c020c11c461cc8cdb83baf2fe6b73c4e"} Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.727649 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" event={"ID":"ef47e61f-9c90-4ccc-af09-58fcdb99b371","Type":"ContainerStarted","Data":"06343fea36d88df72c7002fb05a7f4f6058959f589dc705390550f89ada0bf4d"} Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.727714 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" event={"ID":"ef47e61f-9c90-4ccc-af09-58fcdb99b371","Type":"ContainerStarted","Data":"8741571506dc666b9d9e85e736689d206061bc889bd5b6b3e7a15b6e4c4eafb7"} Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.728456 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.733814 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-config\") pod \"controller-manager-879f6c89f-dm45b\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.733861 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-client-ca\") pod \"controller-manager-879f6c89f-dm45b\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.733881 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5d608b56-31aa-4893-88d5-7f19283a6706-serving-cert\") pod \"controller-manager-879f6c89f-dm45b\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.733933 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dm45b\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.733950 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcf74\" (UniqueName: \"kubernetes.io/projected/5d608b56-31aa-4893-88d5-7f19283a6706-kube-api-access-jcf74\") pod \"controller-manager-879f6c89f-dm45b\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.735457 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-config\") pod \"controller-manager-879f6c89f-dm45b\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.738552 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ccbzg"] Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.740647 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-client-ca\") pod \"controller-manager-879f6c89f-dm45b\" (UID: 
\"5d608b56-31aa-4893-88d5-7f19283a6706\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.751275 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ccbzg"] Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.752497 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dm45b\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.775461 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcf74\" (UniqueName: \"kubernetes.io/projected/5d608b56-31aa-4893-88d5-7f19283a6706-kube-api-access-jcf74\") pod \"controller-manager-879f6c89f-dm45b\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.776100 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d608b56-31aa-4893-88d5-7f19283a6706-serving-cert\") pod \"controller-manager-879f6c89f-dm45b\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.883930 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"58690bdc-c80f-4740-9604-9af4bf303576","Type":"ContainerStarted","Data":"1db5f50eb0a20531d8b2ed42137afb7eddbc27ed46a9ef796a401049f85be73e"} Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.884458 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"58690bdc-c80f-4740-9604-9af4bf303576","Type":"ContainerStarted","Data":"a10791e8217754732e9272492226283c234115abc45321a5d2cc0c2152139a2e"} Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.885922 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" podStartSLOduration=134.885910337 podStartE2EDuration="2m14.885910337s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:36.786208882 +0000 UTC m=+154.009151024" watchObservedRunningTime="2026-01-29 12:08:36.885910337 +0000 UTC m=+154.108852469" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.887447 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.8874420020000002 podStartE2EDuration="3.887442002s" podCreationTimestamp="2026-01-29 12:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:36.864719455 +0000 UTC m=+154.087661597" watchObservedRunningTime="2026-01-29 12:08:36.887442002 +0000 UTC m=+154.110384134" Jan 29 12:08:36 crc kubenswrapper[4660]: I0129 12:08:36.891286 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.203346 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:37 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:37 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:37 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.203628 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.316207 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dm45b"] Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.512001 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="101df3f7-db56-4a33-a0ec-d513f6785dde" path="/var/lib/kubelet/pods/101df3f7-db56-4a33-a0ec-d513f6785dde/volumes" Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.514840 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.564355 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-secret-volume\") pod \"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8\" (UID: \"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8\") " Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.564435 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-config-volume\") pod \"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8\" (UID: \"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8\") " Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.564470 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7wcs\" (UniqueName: \"kubernetes.io/projected/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-kube-api-access-t7wcs\") pod \"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8\" (UID: \"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8\") " Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.565474 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-config-volume" (OuterVolumeSpecName: "config-volume") pod "fd77e369-3bfb-4bd7-aca5-441b93b3a2c8" (UID: "fd77e369-3bfb-4bd7-aca5-441b93b3a2c8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.569936 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-kube-api-access-t7wcs" (OuterVolumeSpecName: "kube-api-access-t7wcs") pod "fd77e369-3bfb-4bd7-aca5-441b93b3a2c8" (UID: "fd77e369-3bfb-4bd7-aca5-441b93b3a2c8"). 
InnerVolumeSpecName "kube-api-access-t7wcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.582782 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fd77e369-3bfb-4bd7-aca5-441b93b3a2c8" (UID: "fd77e369-3bfb-4bd7-aca5-441b93b3a2c8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.666421 4660 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.666451 4660 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.666755 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7wcs\" (UniqueName: \"kubernetes.io/projected/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8-kube-api-access-t7wcs\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.858984 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" event={"ID":"5d608b56-31aa-4893-88d5-7f19283a6706","Type":"ContainerStarted","Data":"1a554a817ab88736a6e8a353bcb504fd0d3b59c4a725b5783d2923576075a197"} Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.873593 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxjk" 
event={"ID":"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae","Type":"ContainerStarted","Data":"6e652b94b7122fd58c52839d1ef881869e357215301718bafc99f919f422fe64"} Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.886625 4660 generic.go:334] "Generic (PLEG): container finished" podID="58690bdc-c80f-4740-9604-9af4bf303576" containerID="1db5f50eb0a20531d8b2ed42137afb7eddbc27ed46a9ef796a401049f85be73e" exitCode=0 Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.886759 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"58690bdc-c80f-4740-9604-9af4bf303576","Type":"ContainerDied","Data":"1db5f50eb0a20531d8b2ed42137afb7eddbc27ed46a9ef796a401049f85be73e"} Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.901479 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lgwk" event={"ID":"42a55618-51a2-4df8-b139-ed326fd6371f","Type":"ContainerStarted","Data":"2af1b7031b9ebdf1167d016d08ca41dd9bb2355e2017d1907b0030ea1dac3e53"} Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.911587 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" event={"ID":"fd77e369-3bfb-4bd7-aca5-441b93b3a2c8","Type":"ContainerDied","Data":"17583a1ee8972e5e9d681d075aa6fefa8462e32c7b0baa7e6b38f9e3ad0e37fa"} Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.911644 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17583a1ee8972e5e9d681d075aa6fefa8462e32c7b0baa7e6b38f9e3ad0e37fa" Jan 29 12:08:37 crc kubenswrapper[4660]: I0129 12:08:37.911602 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss" Jan 29 12:08:38 crc kubenswrapper[4660]: I0129 12:08:38.207118 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:38 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:38 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:38 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:38 crc kubenswrapper[4660]: I0129 12:08:38.207168 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:38 crc kubenswrapper[4660]: I0129 12:08:38.972447 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" event={"ID":"5d608b56-31aa-4893-88d5-7f19283a6706","Type":"ContainerStarted","Data":"b84d0b23c7cd720ba7af33f5254a6c7a112b5b69369a555433a36563fb19e509"} Jan 29 12:08:38 crc kubenswrapper[4660]: I0129 12:08:38.973709 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:39 crc kubenswrapper[4660]: I0129 12:08:39.004518 4660 generic.go:334] "Generic (PLEG): container finished" podID="dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" containerID="6e652b94b7122fd58c52839d1ef881869e357215301718bafc99f919f422fe64" exitCode=0 Jan 29 12:08:39 crc kubenswrapper[4660]: I0129 12:08:39.004708 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxjk" 
event={"ID":"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae","Type":"ContainerDied","Data":"6e652b94b7122fd58c52839d1ef881869e357215301718bafc99f919f422fe64"} Jan 29 12:08:39 crc kubenswrapper[4660]: I0129 12:08:39.015335 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" podStartSLOduration=4.015297332 podStartE2EDuration="4.015297332s" podCreationTimestamp="2026-01-29 12:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:39.012770458 +0000 UTC m=+156.235712600" watchObservedRunningTime="2026-01-29 12:08:39.015297332 +0000 UTC m=+156.238239464" Jan 29 12:08:39 crc kubenswrapper[4660]: I0129 12:08:39.018088 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:08:39 crc kubenswrapper[4660]: I0129 12:08:39.032609 4660 generic.go:334] "Generic (PLEG): container finished" podID="42a55618-51a2-4df8-b139-ed326fd6371f" containerID="d788e0b9197f44bd930004b3d39a5c74c5c0205570a44226290177b98dc56638" exitCode=0 Jan 29 12:08:39 crc kubenswrapper[4660]: I0129 12:08:39.032780 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lgwk" event={"ID":"42a55618-51a2-4df8-b139-ed326fd6371f","Type":"ContainerDied","Data":"d788e0b9197f44bd930004b3d39a5c74c5c0205570a44226290177b98dc56638"} Jan 29 12:08:39 crc kubenswrapper[4660]: I0129 12:08:39.195460 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:39 crc kubenswrapper[4660]: I0129 12:08:39.208517 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:39 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:39 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:39 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:39 crc kubenswrapper[4660]: I0129 12:08:39.208592 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:39 crc kubenswrapper[4660]: I0129 12:08:39.214328 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9zrm5" Jan 29 12:08:39 crc kubenswrapper[4660]: I0129 12:08:39.832396 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:08:39 crc kubenswrapper[4660]: I0129 12:08:39.926604 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58690bdc-c80f-4740-9604-9af4bf303576-kubelet-dir\") pod \"58690bdc-c80f-4740-9604-9af4bf303576\" (UID: \"58690bdc-c80f-4740-9604-9af4bf303576\") " Jan 29 12:08:39 crc kubenswrapper[4660]: I0129 12:08:39.927050 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58690bdc-c80f-4740-9604-9af4bf303576-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "58690bdc-c80f-4740-9604-9af4bf303576" (UID: "58690bdc-c80f-4740-9604-9af4bf303576"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.029511 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58690bdc-c80f-4740-9604-9af4bf303576-kube-api-access\") pod \"58690bdc-c80f-4740-9604-9af4bf303576\" (UID: \"58690bdc-c80f-4740-9604-9af4bf303576\") " Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.030081 4660 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58690bdc-c80f-4740-9604-9af4bf303576-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.056898 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58690bdc-c80f-4740-9604-9af4bf303576-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "58690bdc-c80f-4740-9604-9af4bf303576" (UID: "58690bdc-c80f-4740-9604-9af4bf303576"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.065152 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.072078 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"58690bdc-c80f-4740-9604-9af4bf303576","Type":"ContainerDied","Data":"a10791e8217754732e9272492226283c234115abc45321a5d2cc0c2152139a2e"} Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.072137 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a10791e8217754732e9272492226283c234115abc45321a5d2cc0c2152139a2e" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.133024 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/58690bdc-c80f-4740-9604-9af4bf303576-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.207155 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:40 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:40 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:40 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.207212 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.246611 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wtvmf" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.391985 4660 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 12:08:40 crc kubenswrapper[4660]: E0129 12:08:40.392191 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd77e369-3bfb-4bd7-aca5-441b93b3a2c8" containerName="collect-profiles" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.392203 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd77e369-3bfb-4bd7-aca5-441b93b3a2c8" containerName="collect-profiles" Jan 29 12:08:40 crc kubenswrapper[4660]: E0129 12:08:40.392213 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58690bdc-c80f-4740-9604-9af4bf303576" containerName="pruner" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.392219 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="58690bdc-c80f-4740-9604-9af4bf303576" containerName="pruner" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.392304 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="58690bdc-c80f-4740-9604-9af4bf303576" containerName="pruner" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.392318 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd77e369-3bfb-4bd7-aca5-441b93b3a2c8" containerName="collect-profiles" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.392651 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.399307 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.400367 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.408409 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.449301 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b301b13-97a9-4525-8ba2-6bd5a636b043-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0b301b13-97a9-4525-8ba2-6bd5a636b043\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.449363 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b301b13-97a9-4525-8ba2-6bd5a636b043-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0b301b13-97a9-4525-8ba2-6bd5a636b043\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.550629 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b301b13-97a9-4525-8ba2-6bd5a636b043-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0b301b13-97a9-4525-8ba2-6bd5a636b043\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.550710 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b301b13-97a9-4525-8ba2-6bd5a636b043-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0b301b13-97a9-4525-8ba2-6bd5a636b043\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.550838 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b301b13-97a9-4525-8ba2-6bd5a636b043-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0b301b13-97a9-4525-8ba2-6bd5a636b043\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.586482 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b301b13-97a9-4525-8ba2-6bd5a636b043-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0b301b13-97a9-4525-8ba2-6bd5a636b043\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:08:40 crc kubenswrapper[4660]: I0129 12:08:40.715828 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:08:41 crc kubenswrapper[4660]: I0129 12:08:41.202987 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:41 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:41 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:41 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:41 crc kubenswrapper[4660]: I0129 12:08:41.203927 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:41 crc kubenswrapper[4660]: I0129 12:08:41.575420 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 12:08:41 crc kubenswrapper[4660]: W0129 12:08:41.628652 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0b301b13_97a9_4525_8ba2_6bd5a636b043.slice/crio-e2f68ceb8c8c0f0a1d3ec7a55b63e3c80903c193d2d3e4439517b5f3133033de WatchSource:0}: Error finding container e2f68ceb8c8c0f0a1d3ec7a55b63e3c80903c193d2d3e4439517b5f3133033de: Status 404 returned error can't find the container with id e2f68ceb8c8c0f0a1d3ec7a55b63e3c80903c193d2d3e4439517b5f3133033de Jan 29 12:08:42 crc kubenswrapper[4660]: I0129 12:08:42.133001 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0b301b13-97a9-4525-8ba2-6bd5a636b043","Type":"ContainerStarted","Data":"e2f68ceb8c8c0f0a1d3ec7a55b63e3c80903c193d2d3e4439517b5f3133033de"} Jan 29 12:08:42 crc kubenswrapper[4660]: I0129 12:08:42.204543 4660 patch_prober.go:28] 
interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:42 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:42 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:42 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:42 crc kubenswrapper[4660]: I0129 12:08:42.204600 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:43 crc kubenswrapper[4660]: I0129 12:08:43.161424 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0b301b13-97a9-4525-8ba2-6bd5a636b043","Type":"ContainerStarted","Data":"299710a2ed45d98f745513f4e7629e71ee59922466e4f5bef46eb9dc9ffe2a65"} Jan 29 12:08:43 crc kubenswrapper[4660]: I0129 12:08:43.183795 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.183775201 podStartE2EDuration="3.183775201s" podCreationTimestamp="2026-01-29 12:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:43.180255768 +0000 UTC m=+160.403197900" watchObservedRunningTime="2026-01-29 12:08:43.183775201 +0000 UTC m=+160.406723163" Jan 29 12:08:43 crc kubenswrapper[4660]: I0129 12:08:43.205339 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:43 crc kubenswrapper[4660]: 
[-]has-synced failed: reason withheld Jan 29 12:08:43 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:43 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:43 crc kubenswrapper[4660]: I0129 12:08:43.205402 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:44 crc kubenswrapper[4660]: I0129 12:08:44.012092 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-grgf5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 29 12:08:44 crc kubenswrapper[4660]: I0129 12:08:44.012104 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-grgf5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 29 12:08:44 crc kubenswrapper[4660]: I0129 12:08:44.012159 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 29 12:08:44 crc kubenswrapper[4660]: I0129 12:08:44.012162 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 29 12:08:44 crc kubenswrapper[4660]: I0129 12:08:44.194153 4660 generic.go:334] "Generic (PLEG): container finished" 
podID="0b301b13-97a9-4525-8ba2-6bd5a636b043" containerID="299710a2ed45d98f745513f4e7629e71ee59922466e4f5bef46eb9dc9ffe2a65" exitCode=0 Jan 29 12:08:44 crc kubenswrapper[4660]: I0129 12:08:44.194230 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0b301b13-97a9-4525-8ba2-6bd5a636b043","Type":"ContainerDied","Data":"299710a2ed45d98f745513f4e7629e71ee59922466e4f5bef46eb9dc9ffe2a65"} Jan 29 12:08:44 crc kubenswrapper[4660]: I0129 12:08:44.202628 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:44 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:44 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:44 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:44 crc kubenswrapper[4660]: I0129 12:08:44.202715 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:44 crc kubenswrapper[4660]: I0129 12:08:44.968294 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs\") pod \"network-metrics-daemon-kj5hd\" (UID: \"37236252-cd23-4e04-8cf2-28b59af3e179\") " pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:08:44 crc kubenswrapper[4660]: I0129 12:08:44.981957 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37236252-cd23-4e04-8cf2-28b59af3e179-metrics-certs\") pod \"network-metrics-daemon-kj5hd\" (UID: 
\"37236252-cd23-4e04-8cf2-28b59af3e179\") " pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:08:44 crc kubenswrapper[4660]: I0129 12:08:44.985600 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kj5hd" Jan 29 12:08:45 crc kubenswrapper[4660]: I0129 12:08:45.078901 4660 patch_prober.go:28] interesting pod/console-f9d7485db-tvjqj container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 29 12:08:45 crc kubenswrapper[4660]: I0129 12:08:45.078957 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tvjqj" podUID="b177214f-7d4c-4f4f-8741-3a2695d1c495" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 29 12:08:45 crc kubenswrapper[4660]: I0129 12:08:45.229053 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:45 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:45 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:45 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:45 crc kubenswrapper[4660]: I0129 12:08:45.229112 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:45 crc kubenswrapper[4660]: I0129 12:08:45.640096 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kj5hd"] Jan 29 12:08:45 crc 
kubenswrapper[4660]: I0129 12:08:45.644910 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:08:45 crc kubenswrapper[4660]: I0129 12:08:45.801732 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b301b13-97a9-4525-8ba2-6bd5a636b043-kubelet-dir\") pod \"0b301b13-97a9-4525-8ba2-6bd5a636b043\" (UID: \"0b301b13-97a9-4525-8ba2-6bd5a636b043\") " Jan 29 12:08:45 crc kubenswrapper[4660]: I0129 12:08:45.801910 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b301b13-97a9-4525-8ba2-6bd5a636b043-kube-api-access\") pod \"0b301b13-97a9-4525-8ba2-6bd5a636b043\" (UID: \"0b301b13-97a9-4525-8ba2-6bd5a636b043\") " Jan 29 12:08:45 crc kubenswrapper[4660]: I0129 12:08:45.803187 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b301b13-97a9-4525-8ba2-6bd5a636b043-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0b301b13-97a9-4525-8ba2-6bd5a636b043" (UID: "0b301b13-97a9-4525-8ba2-6bd5a636b043"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:08:45 crc kubenswrapper[4660]: I0129 12:08:45.817538 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b301b13-97a9-4525-8ba2-6bd5a636b043-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b301b13-97a9-4525-8ba2-6bd5a636b043" (UID: "0b301b13-97a9-4525-8ba2-6bd5a636b043"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:08:45 crc kubenswrapper[4660]: I0129 12:08:45.902933 4660 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b301b13-97a9-4525-8ba2-6bd5a636b043-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:45 crc kubenswrapper[4660]: I0129 12:08:45.903027 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b301b13-97a9-4525-8ba2-6bd5a636b043-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:46 crc kubenswrapper[4660]: I0129 12:08:46.222556 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:46 crc kubenswrapper[4660]: [-]has-synced failed: reason withheld Jan 29 12:08:46 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:46 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:46 crc kubenswrapper[4660]: I0129 12:08:46.222668 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:46 crc kubenswrapper[4660]: I0129 12:08:46.315292 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" event={"ID":"37236252-cd23-4e04-8cf2-28b59af3e179","Type":"ContainerStarted","Data":"5a5de1dc4f8987793ffb1b730a2d0daa9213168fb85c1e09a9011feb01331a56"} Jan 29 12:08:46 crc kubenswrapper[4660]: I0129 12:08:46.320390 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"0b301b13-97a9-4525-8ba2-6bd5a636b043","Type":"ContainerDied","Data":"e2f68ceb8c8c0f0a1d3ec7a55b63e3c80903c193d2d3e4439517b5f3133033de"} Jan 29 12:08:46 crc kubenswrapper[4660]: I0129 12:08:46.320424 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2f68ceb8c8c0f0a1d3ec7a55b63e3c80903c193d2d3e4439517b5f3133033de" Jan 29 12:08:46 crc kubenswrapper[4660]: I0129 12:08:46.320532 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:08:47 crc kubenswrapper[4660]: I0129 12:08:47.205077 4660 patch_prober.go:28] interesting pod/router-default-5444994796-hbts9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:08:47 crc kubenswrapper[4660]: [+]has-synced ok Jan 29 12:08:47 crc kubenswrapper[4660]: [+]process-running ok Jan 29 12:08:47 crc kubenswrapper[4660]: healthz check failed Jan 29 12:08:47 crc kubenswrapper[4660]: I0129 12:08:47.205400 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hbts9" podUID="8beea9a8-e9d2-44df-9ee4-da38aa7a5ebd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:08:48 crc kubenswrapper[4660]: I0129 12:08:48.203131 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:48 crc kubenswrapper[4660]: I0129 12:08:48.205484 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hbts9" Jan 29 12:08:49 crc kubenswrapper[4660]: I0129 12:08:49.368440 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" 
event={"ID":"37236252-cd23-4e04-8cf2-28b59af3e179","Type":"ContainerStarted","Data":"aeb841d2d3560c457b01cb66f58aac7282d6db90d1989ed60a065527068fd913"} Jan 29 12:08:53 crc kubenswrapper[4660]: I0129 12:08:53.191677 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dm45b"] Jan 29 12:08:53 crc kubenswrapper[4660]: I0129 12:08:53.191932 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" podUID="5d608b56-31aa-4893-88d5-7f19283a6706" containerName="controller-manager" containerID="cri-o://b84d0b23c7cd720ba7af33f5254a6c7a112b5b69369a555433a36563fb19e509" gracePeriod=30 Jan 29 12:08:53 crc kubenswrapper[4660]: I0129 12:08:53.204589 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc"] Jan 29 12:08:53 crc kubenswrapper[4660]: I0129 12:08:53.204838 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" podUID="c5c9df79-c602-4d20-9b74-c96f479d0f03" containerName="route-controller-manager" containerID="cri-o://4af7d1db8479aee3e6ee0592f9d5600e2084e6a03d64b2bbcdc1681ed0534bf9" gracePeriod=30 Jan 29 12:08:53 crc kubenswrapper[4660]: I0129 12:08:53.973588 4660 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wc4dc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 29 12:08:53 crc kubenswrapper[4660]: I0129 12:08:53.974007 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" podUID="c5c9df79-c602-4d20-9b74-c96f479d0f03" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 29 12:08:54 crc kubenswrapper[4660]: I0129 12:08:54.009376 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-grgf5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 29 12:08:54 crc kubenswrapper[4660]: I0129 12:08:54.009431 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 29 12:08:54 crc kubenswrapper[4660]: I0129 12:08:54.009884 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-grgf5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 29 12:08:54 crc kubenswrapper[4660]: I0129 12:08:54.010083 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 29 12:08:54 crc kubenswrapper[4660]: I0129 12:08:54.010167 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-grgf5" Jan 29 12:08:54 crc kubenswrapper[4660]: I0129 12:08:54.011031 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-grgf5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 29 12:08:54 crc kubenswrapper[4660]: I0129 12:08:54.011095 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 29 12:08:54 crc kubenswrapper[4660]: I0129 12:08:54.011270 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"d0f20488c07a411e01bcfd7cbb0eba8a3507c5ce378b32e8e016f8ee024b06fb"} pod="openshift-console/downloads-7954f5f757-grgf5" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 29 12:08:54 crc kubenswrapper[4660]: I0129 12:08:54.011451 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" containerID="cri-o://d0f20488c07a411e01bcfd7cbb0eba8a3507c5ce378b32e8e016f8ee024b06fb" gracePeriod=2 Jan 29 12:08:55 crc kubenswrapper[4660]: I0129 12:08:55.074545 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:55 crc kubenswrapper[4660]: I0129 12:08:55.078335 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:08:55 crc kubenswrapper[4660]: I0129 12:08:55.558320 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:08:56 crc kubenswrapper[4660]: I0129 12:08:56.269888 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:08:56 crc kubenswrapper[4660]: I0129 12:08:56.270276 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4660]: I0129 12:08:56.894219 4660 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dm45b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" start-of-body= Jan 29 12:08:56 crc kubenswrapper[4660]: I0129 12:08:56.894289 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" podUID="5d608b56-31aa-4893-88d5-7f19283a6706" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" Jan 29 12:08:57 crc kubenswrapper[4660]: I0129 12:08:57.470264 4660 generic.go:334] "Generic (PLEG): container finished" podID="5d163d7f-d49a-4487-9a62-a094182ac910" containerID="d0f20488c07a411e01bcfd7cbb0eba8a3507c5ce378b32e8e016f8ee024b06fb" exitCode=0 Jan 29 12:08:57 crc kubenswrapper[4660]: I0129 12:08:57.472850 4660 generic.go:334] "Generic (PLEG): container finished" podID="5d608b56-31aa-4893-88d5-7f19283a6706" containerID="b84d0b23c7cd720ba7af33f5254a6c7a112b5b69369a555433a36563fb19e509" exitCode=0 Jan 29 12:08:57 crc kubenswrapper[4660]: I0129 12:08:57.475135 4660 generic.go:334] "Generic (PLEG): container finished" 
podID="c5c9df79-c602-4d20-9b74-c96f479d0f03" containerID="4af7d1db8479aee3e6ee0592f9d5600e2084e6a03d64b2bbcdc1681ed0534bf9" exitCode=0 Jan 29 12:08:57 crc kubenswrapper[4660]: I0129 12:08:57.484678 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-grgf5" event={"ID":"5d163d7f-d49a-4487-9a62-a094182ac910","Type":"ContainerDied","Data":"d0f20488c07a411e01bcfd7cbb0eba8a3507c5ce378b32e8e016f8ee024b06fb"} Jan 29 12:08:57 crc kubenswrapper[4660]: I0129 12:08:57.484787 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" event={"ID":"5d608b56-31aa-4893-88d5-7f19283a6706","Type":"ContainerDied","Data":"b84d0b23c7cd720ba7af33f5254a6c7a112b5b69369a555433a36563fb19e509"} Jan 29 12:08:57 crc kubenswrapper[4660]: I0129 12:08:57.484810 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" event={"ID":"c5c9df79-c602-4d20-9b74-c96f479d0f03","Type":"ContainerDied","Data":"4af7d1db8479aee3e6ee0592f9d5600e2084e6a03d64b2bbcdc1681ed0534bf9"} Jan 29 12:09:01 crc kubenswrapper[4660]: I0129 12:09:01.923425 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:09:01 crc kubenswrapper[4660]: I0129 12:09:01.930246 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:09:01 crc kubenswrapper[4660]: I0129 12:09:01.951086 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69c97c8649-swpc4"] Jan 29 12:09:01 crc kubenswrapper[4660]: E0129 12:09:01.953020 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c9df79-c602-4d20-9b74-c96f479d0f03" containerName="route-controller-manager" Jan 29 12:09:01 crc kubenswrapper[4660]: I0129 12:09:01.953052 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c9df79-c602-4d20-9b74-c96f479d0f03" containerName="route-controller-manager" Jan 29 12:09:01 crc kubenswrapper[4660]: E0129 12:09:01.953081 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b301b13-97a9-4525-8ba2-6bd5a636b043" containerName="pruner" Jan 29 12:09:01 crc kubenswrapper[4660]: I0129 12:09:01.953089 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b301b13-97a9-4525-8ba2-6bd5a636b043" containerName="pruner" Jan 29 12:09:01 crc kubenswrapper[4660]: E0129 12:09:01.953118 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d608b56-31aa-4893-88d5-7f19283a6706" containerName="controller-manager" Jan 29 12:09:01 crc kubenswrapper[4660]: I0129 12:09:01.953127 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d608b56-31aa-4893-88d5-7f19283a6706" containerName="controller-manager" Jan 29 12:09:01 crc kubenswrapper[4660]: I0129 12:09:01.953531 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b301b13-97a9-4525-8ba2-6bd5a636b043" containerName="pruner" Jan 29 12:09:01 crc kubenswrapper[4660]: I0129 12:09:01.953563 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d608b56-31aa-4893-88d5-7f19283a6706" containerName="controller-manager" Jan 29 12:09:01 crc kubenswrapper[4660]: I0129 12:09:01.953580 4660 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c5c9df79-c602-4d20-9b74-c96f479d0f03" containerName="route-controller-manager" Jan 29 12:09:01 crc kubenswrapper[4660]: I0129 12:09:01.954499 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:01 crc kubenswrapper[4660]: I0129 12:09:01.999392 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69c97c8649-swpc4"] Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.081453 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d608b56-31aa-4893-88d5-7f19283a6706-serving-cert\") pod \"5d608b56-31aa-4893-88d5-7f19283a6706\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.081495 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcf74\" (UniqueName: \"kubernetes.io/projected/5d608b56-31aa-4893-88d5-7f19283a6706-kube-api-access-jcf74\") pod \"5d608b56-31aa-4893-88d5-7f19283a6706\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.081516 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-config\") pod \"5d608b56-31aa-4893-88d5-7f19283a6706\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.081549 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5c9df79-c602-4d20-9b74-c96f479d0f03-client-ca\") pod \"c5c9df79-c602-4d20-9b74-c96f479d0f03\" (UID: \"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.081588 4660 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5c9df79-c602-4d20-9b74-c96f479d0f03-config\") pod \"c5c9df79-c602-4d20-9b74-c96f479d0f03\" (UID: \"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.081612 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5c9df79-c602-4d20-9b74-c96f479d0f03-serving-cert\") pod \"c5c9df79-c602-4d20-9b74-c96f479d0f03\" (UID: \"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.081654 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-proxy-ca-bundles\") pod \"5d608b56-31aa-4893-88d5-7f19283a6706\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.081672 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mzfr\" (UniqueName: \"kubernetes.io/projected/c5c9df79-c602-4d20-9b74-c96f479d0f03-kube-api-access-2mzfr\") pod \"c5c9df79-c602-4d20-9b74-c96f479d0f03\" (UID: \"c5c9df79-c602-4d20-9b74-c96f479d0f03\") " Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.081771 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-client-ca\") pod \"5d608b56-31aa-4893-88d5-7f19283a6706\" (UID: \"5d608b56-31aa-4893-88d5-7f19283a6706\") " Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.081949 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-client-ca\") pod \"controller-manager-69c97c8649-swpc4\" 
(UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.081975 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-config\") pod \"controller-manager-69c97c8649-swpc4\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.081995 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5566c\" (UniqueName: \"kubernetes.io/projected/20f897e6-b397-4065-9287-59e822d220bb-kube-api-access-5566c\") pod \"controller-manager-69c97c8649-swpc4\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.082036 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-proxy-ca-bundles\") pod \"controller-manager-69c97c8649-swpc4\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.082060 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f897e6-b397-4065-9287-59e822d220bb-serving-cert\") pod \"controller-manager-69c97c8649-swpc4\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.089993 4660 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/c5c9df79-c602-4d20-9b74-c96f479d0f03-client-ca" (OuterVolumeSpecName: "client-ca") pod "c5c9df79-c602-4d20-9b74-c96f479d0f03" (UID: "c5c9df79-c602-4d20-9b74-c96f479d0f03"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.090049 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5d608b56-31aa-4893-88d5-7f19283a6706" (UID: "5d608b56-31aa-4893-88d5-7f19283a6706"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.090373 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-client-ca" (OuterVolumeSpecName: "client-ca") pod "5d608b56-31aa-4893-88d5-7f19283a6706" (UID: "5d608b56-31aa-4893-88d5-7f19283a6706"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.090660 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5c9df79-c602-4d20-9b74-c96f479d0f03-config" (OuterVolumeSpecName: "config") pod "c5c9df79-c602-4d20-9b74-c96f479d0f03" (UID: "c5c9df79-c602-4d20-9b74-c96f479d0f03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.091006 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-config" (OuterVolumeSpecName: "config") pod "5d608b56-31aa-4893-88d5-7f19283a6706" (UID: "5d608b56-31aa-4893-88d5-7f19283a6706"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.091816 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d608b56-31aa-4893-88d5-7f19283a6706-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5d608b56-31aa-4893-88d5-7f19283a6706" (UID: "5d608b56-31aa-4893-88d5-7f19283a6706"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.093540 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d608b56-31aa-4893-88d5-7f19283a6706-kube-api-access-jcf74" (OuterVolumeSpecName: "kube-api-access-jcf74") pod "5d608b56-31aa-4893-88d5-7f19283a6706" (UID: "5d608b56-31aa-4893-88d5-7f19283a6706"). InnerVolumeSpecName "kube-api-access-jcf74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.093705 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c9df79-c602-4d20-9b74-c96f479d0f03-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5c9df79-c602-4d20-9b74-c96f479d0f03" (UID: "c5c9df79-c602-4d20-9b74-c96f479d0f03"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.094523 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c9df79-c602-4d20-9b74-c96f479d0f03-kube-api-access-2mzfr" (OuterVolumeSpecName: "kube-api-access-2mzfr") pod "c5c9df79-c602-4d20-9b74-c96f479d0f03" (UID: "c5c9df79-c602-4d20-9b74-c96f479d0f03"). InnerVolumeSpecName "kube-api-access-2mzfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.183644 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-client-ca\") pod \"controller-manager-69c97c8649-swpc4\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.183751 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-config\") pod \"controller-manager-69c97c8649-swpc4\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.183782 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5566c\" (UniqueName: \"kubernetes.io/projected/20f897e6-b397-4065-9287-59e822d220bb-kube-api-access-5566c\") pod \"controller-manager-69c97c8649-swpc4\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.183808 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-proxy-ca-bundles\") pod \"controller-manager-69c97c8649-swpc4\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.183832 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f897e6-b397-4065-9287-59e822d220bb-serving-cert\") pod 
\"controller-manager-69c97c8649-swpc4\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.183922 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d608b56-31aa-4893-88d5-7f19283a6706-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.183934 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcf74\" (UniqueName: \"kubernetes.io/projected/5d608b56-31aa-4893-88d5-7f19283a6706-kube-api-access-jcf74\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.183943 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.183951 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5c9df79-c602-4d20-9b74-c96f479d0f03-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.183959 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5c9df79-c602-4d20-9b74-c96f479d0f03-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.183967 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5c9df79-c602-4d20-9b74-c96f479d0f03-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.183975 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" 
Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.183984 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mzfr\" (UniqueName: \"kubernetes.io/projected/c5c9df79-c602-4d20-9b74-c96f479d0f03-kube-api-access-2mzfr\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.183993 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d608b56-31aa-4893-88d5-7f19283a6706-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.184731 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-client-ca\") pod \"controller-manager-69c97c8649-swpc4\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.185625 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-proxy-ca-bundles\") pod \"controller-manager-69c97c8649-swpc4\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.186022 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-config\") pod \"controller-manager-69c97c8649-swpc4\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.187348 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/20f897e6-b397-4065-9287-59e822d220bb-serving-cert\") pod \"controller-manager-69c97c8649-swpc4\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.201326 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5566c\" (UniqueName: \"kubernetes.io/projected/20f897e6-b397-4065-9287-59e822d220bb-kube-api-access-5566c\") pod \"controller-manager-69c97c8649-swpc4\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.310259 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.511670 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" event={"ID":"c5c9df79-c602-4d20-9b74-c96f479d0f03","Type":"ContainerDied","Data":"bac7a5c64aa8b14c114aaa056f7adfd2ccf3d224ab4276b7b9053a4d7676d822"} Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.511738 4660 scope.go:117] "RemoveContainer" containerID="4af7d1db8479aee3e6ee0592f9d5600e2084e6a03d64b2bbcdc1681ed0534bf9" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.511869 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.519040 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" event={"ID":"5d608b56-31aa-4893-88d5-7f19283a6706","Type":"ContainerDied","Data":"1a554a817ab88736a6e8a353bcb504fd0d3b59c4a725b5783d2923576075a197"} Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.519138 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dm45b" Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.540326 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc"] Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.545469 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wc4dc"] Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.555017 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dm45b"] Jan 29 12:09:02 crc kubenswrapper[4660]: I0129 12:09:02.560846 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dm45b"] Jan 29 12:09:03 crc kubenswrapper[4660]: I0129 12:09:03.476324 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d608b56-31aa-4893-88d5-7f19283a6706" path="/var/lib/kubelet/pods/5d608b56-31aa-4893-88d5-7f19283a6706/volumes" Jan 29 12:09:03 crc kubenswrapper[4660]: I0129 12:09:03.477340 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5c9df79-c602-4d20-9b74-c96f479d0f03" path="/var/lib/kubelet/pods/c5c9df79-c602-4d20-9b74-c96f479d0f03/volumes" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 
12:09:04.010171 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-grgf5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.010226 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.275965 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qqwgw" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.534411 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv"] Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.535322 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.537073 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.537330 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.538734 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.538776 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.539121 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.540613 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.544628 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv"] Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.725197 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhpvt\" (UniqueName: \"kubernetes.io/projected/acd13a2f-baca-4327-bc20-f7ce95020543-kube-api-access-jhpvt\") pod \"route-controller-manager-6878b88c94-w5zdv\" (UID: \"acd13a2f-baca-4327-bc20-f7ce95020543\") " pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.725262 4660 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acd13a2f-baca-4327-bc20-f7ce95020543-client-ca\") pod \"route-controller-manager-6878b88c94-w5zdv\" (UID: \"acd13a2f-baca-4327-bc20-f7ce95020543\") " pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.725314 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acd13a2f-baca-4327-bc20-f7ce95020543-serving-cert\") pod \"route-controller-manager-6878b88c94-w5zdv\" (UID: \"acd13a2f-baca-4327-bc20-f7ce95020543\") " pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.725351 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd13a2f-baca-4327-bc20-f7ce95020543-config\") pod \"route-controller-manager-6878b88c94-w5zdv\" (UID: \"acd13a2f-baca-4327-bc20-f7ce95020543\") " pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.827223 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhpvt\" (UniqueName: \"kubernetes.io/projected/acd13a2f-baca-4327-bc20-f7ce95020543-kube-api-access-jhpvt\") pod \"route-controller-manager-6878b88c94-w5zdv\" (UID: \"acd13a2f-baca-4327-bc20-f7ce95020543\") " pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.827523 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acd13a2f-baca-4327-bc20-f7ce95020543-client-ca\") pod 
\"route-controller-manager-6878b88c94-w5zdv\" (UID: \"acd13a2f-baca-4327-bc20-f7ce95020543\") " pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.827594 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acd13a2f-baca-4327-bc20-f7ce95020543-serving-cert\") pod \"route-controller-manager-6878b88c94-w5zdv\" (UID: \"acd13a2f-baca-4327-bc20-f7ce95020543\") " pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.827665 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd13a2f-baca-4327-bc20-f7ce95020543-config\") pod \"route-controller-manager-6878b88c94-w5zdv\" (UID: \"acd13a2f-baca-4327-bc20-f7ce95020543\") " pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.828350 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acd13a2f-baca-4327-bc20-f7ce95020543-client-ca\") pod \"route-controller-manager-6878b88c94-w5zdv\" (UID: \"acd13a2f-baca-4327-bc20-f7ce95020543\") " pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.830852 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd13a2f-baca-4327-bc20-f7ce95020543-config\") pod \"route-controller-manager-6878b88c94-w5zdv\" (UID: \"acd13a2f-baca-4327-bc20-f7ce95020543\") " pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.844969 4660 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acd13a2f-baca-4327-bc20-f7ce95020543-serving-cert\") pod \"route-controller-manager-6878b88c94-w5zdv\" (UID: \"acd13a2f-baca-4327-bc20-f7ce95020543\") " pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.849454 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhpvt\" (UniqueName: \"kubernetes.io/projected/acd13a2f-baca-4327-bc20-f7ce95020543-kube-api-access-jhpvt\") pod \"route-controller-manager-6878b88c94-w5zdv\" (UID: \"acd13a2f-baca-4327-bc20-f7ce95020543\") " pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:04 crc kubenswrapper[4660]: I0129 12:09:04.870802 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:10 crc kubenswrapper[4660]: I0129 12:09:10.706386 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:09:12 crc kubenswrapper[4660]: E0129 12:09:12.114336 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 12:09:12 crc kubenswrapper[4660]: E0129 12:09:12.115419 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxpfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7lgwk_openshift-marketplace(42a55618-51a2-4df8-b139-ed326fd6371f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 12:09:12 crc kubenswrapper[4660]: E0129 12:09:12.117093 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-7lgwk" podUID="42a55618-51a2-4df8-b139-ed326fd6371f" Jan 29 12:09:13 crc 
kubenswrapper[4660]: I0129 12:09:13.175958 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69c97c8649-swpc4"] Jan 29 12:09:13 crc kubenswrapper[4660]: I0129 12:09:13.272579 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv"] Jan 29 12:09:13 crc kubenswrapper[4660]: E0129 12:09:13.988777 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7lgwk" podUID="42a55618-51a2-4df8-b139-ed326fd6371f" Jan 29 12:09:14 crc kubenswrapper[4660]: I0129 12:09:14.009449 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-grgf5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 29 12:09:14 crc kubenswrapper[4660]: I0129 12:09:14.009499 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 29 12:09:14 crc kubenswrapper[4660]: E0129 12:09:14.106590 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 12:09:14 crc kubenswrapper[4660]: E0129 12:09:14.106999 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nw6tq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9x2dh_openshift-marketplace(d4b35a13-7853-4d23-9cd4-015b2d10d25a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 12:09:14 crc kubenswrapper[4660]: E0129 12:09:14.108426 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9x2dh" podUID="d4b35a13-7853-4d23-9cd4-015b2d10d25a" Jan 29 12:09:16 crc kubenswrapper[4660]: E0129 12:09:16.130903 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9x2dh" podUID="d4b35a13-7853-4d23-9cd4-015b2d10d25a" Jan 29 12:09:16 crc kubenswrapper[4660]: I0129 12:09:16.154345 4660 scope.go:117] "RemoveContainer" containerID="b84d0b23c7cd720ba7af33f5254a6c7a112b5b69369a555433a36563fb19e509" Jan 29 12:09:16 crc kubenswrapper[4660]: E0129 12:09:16.164993 4660 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/controller-manager-879f6c89f-dm45b_openshift-controller-manager_controller-manager-b84d0b23c7cd720ba7af33f5254a6c7a112b5b69369a555433a36563fb19e509.log: no such file or directory" path="/var/log/containers/controller-manager-879f6c89f-dm45b_openshift-controller-manager_controller-manager-b84d0b23c7cd720ba7af33f5254a6c7a112b5b69369a555433a36563fb19e509.log" Jan 29 12:09:16 crc kubenswrapper[4660]: E0129 12:09:16.384264 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 12:09:16 crc kubenswrapper[4660]: E0129 12:09:16.384661 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4b4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qwwx4_openshift-marketplace(e02cd887-c6fb-48e5-9c08-23bb0bffd1ea): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 12:09:16 crc kubenswrapper[4660]: E0129 12:09:16.387231 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qwwx4" podUID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" Jan 29 12:09:16 crc 
kubenswrapper[4660]: E0129 12:09:16.638051 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qwwx4" podUID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" Jan 29 12:09:16 crc kubenswrapper[4660]: I0129 12:09:16.736576 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv"] Jan 29 12:09:16 crc kubenswrapper[4660]: W0129 12:09:16.743012 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacd13a2f_baca_4327_bc20_f7ce95020543.slice/crio-983f245673e17b4c83275caf41b6757e9c7bed91c3f9aeffa96a026a4f17e7a8 WatchSource:0}: Error finding container 983f245673e17b4c83275caf41b6757e9c7bed91c3f9aeffa96a026a4f17e7a8: Status 404 returned error can't find the container with id 983f245673e17b4c83275caf41b6757e9c7bed91c3f9aeffa96a026a4f17e7a8 Jan 29 12:09:16 crc kubenswrapper[4660]: I0129 12:09:16.808739 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69c97c8649-swpc4"] Jan 29 12:09:16 crc kubenswrapper[4660]: W0129 12:09:16.819835 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20f897e6_b397_4065_9287_59e822d220bb.slice/crio-bf832f498e3b7d7ca08003c2501273429d79e42fe5c4e32643f850f50f75d4f6 WatchSource:0}: Error finding container bf832f498e3b7d7ca08003c2501273429d79e42fe5c4e32643f850f50f75d4f6: Status 404 returned error can't find the container with id bf832f498e3b7d7ca08003c2501273429d79e42fe5c4e32643f850f50f75d4f6 Jan 29 12:09:17 crc kubenswrapper[4660]: E0129 12:09:17.271622 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 12:09:17 crc kubenswrapper[4660]: E0129 12:09:17.272113 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-th8lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-x95ls_openshift-marketplace(2f46b165-b1bd-42ea-8704-adafba36b152): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest 
list: copying config: context canceled" logger="UnhandledError" Jan 29 12:09:17 crc kubenswrapper[4660]: E0129 12:09:17.284403 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-x95ls" podUID="2f46b165-b1bd-42ea-8704-adafba36b152" Jan 29 12:09:17 crc kubenswrapper[4660]: I0129 12:09:17.642463 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kj5hd" event={"ID":"37236252-cd23-4e04-8cf2-28b59af3e179","Type":"ContainerStarted","Data":"06dd367d34d4f78ddae0e98720cd677a8f2e5ae559b5279346f367c09f8d12cc"} Jan 29 12:09:17 crc kubenswrapper[4660]: I0129 12:09:17.644954 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-grgf5" event={"ID":"5d163d7f-d49a-4487-9a62-a094182ac910","Type":"ContainerStarted","Data":"b9170041b14d3d5755d4814f732242b9b15db37bbbd844124c01811506f709a0"} Jan 29 12:09:17 crc kubenswrapper[4660]: I0129 12:09:17.645197 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-grgf5" Jan 29 12:09:17 crc kubenswrapper[4660]: I0129 12:09:17.645562 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-grgf5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 29 12:09:17 crc kubenswrapper[4660]: I0129 12:09:17.645621 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" 
Jan 29 12:09:17 crc kubenswrapper[4660]: I0129 12:09:17.646380 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" event={"ID":"20f897e6-b397-4065-9287-59e822d220bb","Type":"ContainerStarted","Data":"f76af4f7f7f3685426fde8f45678cc0eb86e3cae8d7c0c6fb36619171c3046b6"} Jan 29 12:09:17 crc kubenswrapper[4660]: I0129 12:09:17.646415 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" event={"ID":"20f897e6-b397-4065-9287-59e822d220bb","Type":"ContainerStarted","Data":"bf832f498e3b7d7ca08003c2501273429d79e42fe5c4e32643f850f50f75d4f6"} Jan 29 12:09:17 crc kubenswrapper[4660]: I0129 12:09:17.647929 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvv98" event={"ID":"58e4fad5-42bd-4652-9a66-d80ada00bb84","Type":"ContainerStarted","Data":"920eaa381bf6d03ae4d025c8e8538d02bdda4b107d5cf2e9620e19f962e02a90"} Jan 29 12:09:17 crc kubenswrapper[4660]: I0129 12:09:17.649060 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" event={"ID":"acd13a2f-baca-4327-bc20-f7ce95020543","Type":"ContainerStarted","Data":"983b5ab7fe18885af648f8386b29116cf7e42750c1a3f316bdba3f1aa259d0e9"} Jan 29 12:09:17 crc kubenswrapper[4660]: I0129 12:09:17.649106 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" event={"ID":"acd13a2f-baca-4327-bc20-f7ce95020543","Type":"ContainerStarted","Data":"983f245673e17b4c83275caf41b6757e9c7bed91c3f9aeffa96a026a4f17e7a8"} Jan 29 12:09:17 crc kubenswrapper[4660]: E0129 12:09:17.650420 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x95ls" podUID="2f46b165-b1bd-42ea-8704-adafba36b152" Jan 29 12:09:17 crc kubenswrapper[4660]: I0129 12:09:17.699061 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kj5hd" podStartSLOduration=175.69903625 podStartE2EDuration="2m55.69903625s" podCreationTimestamp="2026-01-29 12:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:09:17.669467792 +0000 UTC m=+194.892409924" watchObservedRunningTime="2026-01-29 12:09:17.69903625 +0000 UTC m=+194.921978382" Jan 29 12:09:17 crc kubenswrapper[4660]: I0129 12:09:17.983854 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 12:09:17 crc kubenswrapper[4660]: I0129 12:09:17.984890 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:09:17 crc kubenswrapper[4660]: I0129 12:09:17.987722 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 12:09:17 crc kubenswrapper[4660]: I0129 12:09:17.988275 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.006266 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.028636 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92ffe523-8507-4281-b64f-a27d0c2df483-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"92ffe523-8507-4281-b64f-a27d0c2df483\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.028724 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92ffe523-8507-4281-b64f-a27d0c2df483-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"92ffe523-8507-4281-b64f-a27d0c2df483\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:09:18 crc kubenswrapper[4660]: E0129 12:09:18.096628 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 12:09:18 crc kubenswrapper[4660]: E0129 12:09:18.096851 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xznfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-nlxjk_openshift-marketplace(dccda9d5-1d0b-4ba3-a3e4-07234d4596ae): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 12:09:18 crc kubenswrapper[4660]: E0129 12:09:18.097979 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-nlxjk" podUID="dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.129856 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92ffe523-8507-4281-b64f-a27d0c2df483-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"92ffe523-8507-4281-b64f-a27d0c2df483\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.129967 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92ffe523-8507-4281-b64f-a27d0c2df483-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"92ffe523-8507-4281-b64f-a27d0c2df483\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.130048 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92ffe523-8507-4281-b64f-a27d0c2df483-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"92ffe523-8507-4281-b64f-a27d0c2df483\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.151119 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92ffe523-8507-4281-b64f-a27d0c2df483-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"92ffe523-8507-4281-b64f-a27d0c2df483\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.298429 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.661139 4660 generic.go:334] "Generic (PLEG): container finished" podID="58e4fad5-42bd-4652-9a66-d80ada00bb84" containerID="920eaa381bf6d03ae4d025c8e8538d02bdda4b107d5cf2e9620e19f962e02a90" exitCode=0 Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.661255 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvv98" event={"ID":"58e4fad5-42bd-4652-9a66-d80ada00bb84","Type":"ContainerDied","Data":"920eaa381bf6d03ae4d025c8e8538d02bdda4b107d5cf2e9620e19f962e02a90"} Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.661971 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" podUID="acd13a2f-baca-4327-bc20-f7ce95020543" containerName="route-controller-manager" containerID="cri-o://983b5ab7fe18885af648f8386b29116cf7e42750c1a3f316bdba3f1aa259d0e9" gracePeriod=30 Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.662063 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.662037 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" podUID="20f897e6-b397-4065-9287-59e822d220bb" containerName="controller-manager" containerID="cri-o://f76af4f7f7f3685426fde8f45678cc0eb86e3cae8d7c0c6fb36619171c3046b6" gracePeriod=30 Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.662364 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-grgf5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" 
start-of-body= Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.662395 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 29 12:09:18 crc kubenswrapper[4660]: E0129 12:09:18.664714 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-nlxjk" podUID="dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.675392 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.733991 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" podStartSLOduration=25.733968464 podStartE2EDuration="25.733968464s" podCreationTimestamp="2026-01-29 12:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:09:18.733358426 +0000 UTC m=+195.956300568" watchObservedRunningTime="2026-01-29 12:09:18.733968464 +0000 UTC m=+195.956910596" Jan 29 12:09:18 crc kubenswrapper[4660]: I0129 12:09:18.759361 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 12:09:19 crc kubenswrapper[4660]: E0129 12:09:19.203659 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 12:09:19 crc kubenswrapper[4660]: E0129 12:09:19.204041 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfml4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-49j6q_openshift-marketplace(d70ebf77-b129-4e52-85f6-f969b97e855e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" 
Jan 29 12:09:19 crc kubenswrapper[4660]: E0129 12:09:19.205222 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-49j6q" podUID="d70ebf77-b129-4e52-85f6-f969b97e855e" Jan 29 12:09:19 crc kubenswrapper[4660]: I0129 12:09:19.669933 4660 generic.go:334] "Generic (PLEG): container finished" podID="acd13a2f-baca-4327-bc20-f7ce95020543" containerID="983b5ab7fe18885af648f8386b29116cf7e42750c1a3f316bdba3f1aa259d0e9" exitCode=0 Jan 29 12:09:19 crc kubenswrapper[4660]: I0129 12:09:19.670015 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" event={"ID":"acd13a2f-baca-4327-bc20-f7ce95020543","Type":"ContainerDied","Data":"983b5ab7fe18885af648f8386b29116cf7e42750c1a3f316bdba3f1aa259d0e9"} Jan 29 12:09:19 crc kubenswrapper[4660]: I0129 12:09:19.672241 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"92ffe523-8507-4281-b64f-a27d0c2df483","Type":"ContainerStarted","Data":"943e125289c46c53be29f89d379a3995066c4a84b64f687fe9a6f2f453570e29"} Jan 29 12:09:19 crc kubenswrapper[4660]: I0129 12:09:19.672287 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"92ffe523-8507-4281-b64f-a27d0c2df483","Type":"ContainerStarted","Data":"e9d8e0b1f9b9feb824543c43ca39404e2f59d7297121bd2dcfa14bc175e33de3"} Jan 29 12:09:19 crc kubenswrapper[4660]: I0129 12:09:19.673743 4660 generic.go:334] "Generic (PLEG): container finished" podID="20f897e6-b397-4065-9287-59e822d220bb" containerID="f76af4f7f7f3685426fde8f45678cc0eb86e3cae8d7c0c6fb36619171c3046b6" exitCode=0 Jan 29 12:09:19 crc kubenswrapper[4660]: I0129 12:09:19.673809 4660 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" event={"ID":"20f897e6-b397-4065-9287-59e822d220bb","Type":"ContainerDied","Data":"f76af4f7f7f3685426fde8f45678cc0eb86e3cae8d7c0c6fb36619171c3046b6"} Jan 29 12:09:19 crc kubenswrapper[4660]: E0129 12:09:19.675207 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-49j6q" podUID="d70ebf77-b129-4e52-85f6-f969b97e855e" Jan 29 12:09:19 crc kubenswrapper[4660]: I0129 12:09:19.696478 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" podStartSLOduration=26.696454352 podStartE2EDuration="26.696454352s" podCreationTimestamp="2026-01-29 12:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:09:18.768065744 +0000 UTC m=+195.991007876" watchObservedRunningTime="2026-01-29 12:09:19.696454352 +0000 UTC m=+196.919396484" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.005893 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.011905 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.049343 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6ccbd686fb-2klcl"] Jan 29 12:09:20 crc kubenswrapper[4660]: E0129 12:09:20.049567 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f897e6-b397-4065-9287-59e822d220bb" containerName="controller-manager" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.049580 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f897e6-b397-4065-9287-59e822d220bb" containerName="controller-manager" Jan 29 12:09:20 crc kubenswrapper[4660]: E0129 12:09:20.049596 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd13a2f-baca-4327-bc20-f7ce95020543" containerName="route-controller-manager" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.049602 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd13a2f-baca-4327-bc20-f7ce95020543" containerName="route-controller-manager" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.049723 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd13a2f-baca-4327-bc20-f7ce95020543" containerName="route-controller-manager" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.049739 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f897e6-b397-4065-9287-59e822d220bb" containerName="controller-manager" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.050275 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.072356 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6ccbd686fb-2klcl"] Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.155908 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhpvt\" (UniqueName: \"kubernetes.io/projected/acd13a2f-baca-4327-bc20-f7ce95020543-kube-api-access-jhpvt\") pod \"acd13a2f-baca-4327-bc20-f7ce95020543\" (UID: \"acd13a2f-baca-4327-bc20-f7ce95020543\") " Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.155960 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-config\") pod \"20f897e6-b397-4065-9287-59e822d220bb\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.155990 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-client-ca\") pod \"20f897e6-b397-4065-9287-59e822d220bb\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.156023 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-proxy-ca-bundles\") pod \"20f897e6-b397-4065-9287-59e822d220bb\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.156059 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acd13a2f-baca-4327-bc20-f7ce95020543-client-ca\") pod \"acd13a2f-baca-4327-bc20-f7ce95020543\" (UID: 
\"acd13a2f-baca-4327-bc20-f7ce95020543\") " Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.156081 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f897e6-b397-4065-9287-59e822d220bb-serving-cert\") pod \"20f897e6-b397-4065-9287-59e822d220bb\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.156102 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acd13a2f-baca-4327-bc20-f7ce95020543-serving-cert\") pod \"acd13a2f-baca-4327-bc20-f7ce95020543\" (UID: \"acd13a2f-baca-4327-bc20-f7ce95020543\") " Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.156148 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd13a2f-baca-4327-bc20-f7ce95020543-config\") pod \"acd13a2f-baca-4327-bc20-f7ce95020543\" (UID: \"acd13a2f-baca-4327-bc20-f7ce95020543\") " Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.156797 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd13a2f-baca-4327-bc20-f7ce95020543-client-ca" (OuterVolumeSpecName: "client-ca") pod "acd13a2f-baca-4327-bc20-f7ce95020543" (UID: "acd13a2f-baca-4327-bc20-f7ce95020543"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.156825 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "20f897e6-b397-4065-9287-59e822d220bb" (UID: "20f897e6-b397-4065-9287-59e822d220bb"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.156952 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-config" (OuterVolumeSpecName: "config") pod "20f897e6-b397-4065-9287-59e822d220bb" (UID: "20f897e6-b397-4065-9287-59e822d220bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.157035 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5566c\" (UniqueName: \"kubernetes.io/projected/20f897e6-b397-4065-9287-59e822d220bb-kube-api-access-5566c\") pod \"20f897e6-b397-4065-9287-59e822d220bb\" (UID: \"20f897e6-b397-4065-9287-59e822d220bb\") " Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.157171 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53cc53e5-d343-4847-b168-17099269a92c-serving-cert\") pod \"controller-manager-6ccbd686fb-2klcl\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.157177 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd13a2f-baca-4327-bc20-f7ce95020543-config" (OuterVolumeSpecName: "config") pod "acd13a2f-baca-4327-bc20-f7ce95020543" (UID: "acd13a2f-baca-4327-bc20-f7ce95020543"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.157204 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-config\") pod \"controller-manager-6ccbd686fb-2klcl\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.157245 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlttc\" (UniqueName: \"kubernetes.io/projected/53cc53e5-d343-4847-b168-17099269a92c-kube-api-access-hlttc\") pod \"controller-manager-6ccbd686fb-2klcl\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.157325 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-client-ca\") pod \"controller-manager-6ccbd686fb-2klcl\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.157364 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-proxy-ca-bundles\") pod \"controller-manager-6ccbd686fb-2klcl\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.157401 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.157414 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acd13a2f-baca-4327-bc20-f7ce95020543-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.157425 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acd13a2f-baca-4327-bc20-f7ce95020543-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.157435 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.157450 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-client-ca" (OuterVolumeSpecName: "client-ca") pod "20f897e6-b397-4065-9287-59e822d220bb" (UID: "20f897e6-b397-4065-9287-59e822d220bb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.161889 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f897e6-b397-4065-9287-59e822d220bb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "20f897e6-b397-4065-9287-59e822d220bb" (UID: "20f897e6-b397-4065-9287-59e822d220bb"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.161969 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd13a2f-baca-4327-bc20-f7ce95020543-kube-api-access-jhpvt" (OuterVolumeSpecName: "kube-api-access-jhpvt") pod "acd13a2f-baca-4327-bc20-f7ce95020543" (UID: "acd13a2f-baca-4327-bc20-f7ce95020543"). InnerVolumeSpecName "kube-api-access-jhpvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.167958 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd13a2f-baca-4327-bc20-f7ce95020543-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "acd13a2f-baca-4327-bc20-f7ce95020543" (UID: "acd13a2f-baca-4327-bc20-f7ce95020543"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.180912 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20f897e6-b397-4065-9287-59e822d220bb-kube-api-access-5566c" (OuterVolumeSpecName: "kube-api-access-5566c") pod "20f897e6-b397-4065-9287-59e822d220bb" (UID: "20f897e6-b397-4065-9287-59e822d220bb"). InnerVolumeSpecName "kube-api-access-5566c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.258421 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-client-ca\") pod \"controller-manager-6ccbd686fb-2klcl\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.258848 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-proxy-ca-bundles\") pod \"controller-manager-6ccbd686fb-2klcl\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.258882 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53cc53e5-d343-4847-b168-17099269a92c-serving-cert\") pod \"controller-manager-6ccbd686fb-2klcl\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.258913 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-config\") pod \"controller-manager-6ccbd686fb-2klcl\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.258952 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlttc\" (UniqueName: \"kubernetes.io/projected/53cc53e5-d343-4847-b168-17099269a92c-kube-api-access-hlttc\") pod 
\"controller-manager-6ccbd686fb-2klcl\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.259018 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acd13a2f-baca-4327-bc20-f7ce95020543-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.259213 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5566c\" (UniqueName: \"kubernetes.io/projected/20f897e6-b397-4065-9287-59e822d220bb-kube-api-access-5566c\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.259396 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhpvt\" (UniqueName: \"kubernetes.io/projected/acd13a2f-baca-4327-bc20-f7ce95020543-kube-api-access-jhpvt\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.259426 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20f897e6-b397-4065-9287-59e822d220bb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.259431 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-client-ca\") pod \"controller-manager-6ccbd686fb-2klcl\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.259443 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f897e6-b397-4065-9287-59e822d220bb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.260019 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-proxy-ca-bundles\") pod \"controller-manager-6ccbd686fb-2klcl\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.260230 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-config\") pod \"controller-manager-6ccbd686fb-2klcl\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.262190 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53cc53e5-d343-4847-b168-17099269a92c-serving-cert\") pod \"controller-manager-6ccbd686fb-2klcl\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.277731 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlttc\" (UniqueName: \"kubernetes.io/projected/53cc53e5-d343-4847-b168-17099269a92c-kube-api-access-hlttc\") pod \"controller-manager-6ccbd686fb-2klcl\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: E0129 12:09:20.352243 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 12:09:20 crc kubenswrapper[4660]: E0129 12:09:20.352369 4660 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sv294,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bchtm_openshift-marketplace(6ff71e85-46e1-4ae6-a9bc-d721e1d5248c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 12:09:20 crc kubenswrapper[4660]: E0129 12:09:20.354238 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-bchtm" podUID="6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.388029 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.588285 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6ccbd686fb-2klcl"] Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.680052 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" event={"ID":"53cc53e5-d343-4847-b168-17099269a92c","Type":"ContainerStarted","Data":"6ab1e3c2a14849e76a885b4b645989f6a21fc226654de347b04150112b713e37"} Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.683463 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.683796 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69c97c8649-swpc4" event={"ID":"20f897e6-b397-4065-9287-59e822d220bb","Type":"ContainerDied","Data":"bf832f498e3b7d7ca08003c2501273429d79e42fe5c4e32643f850f50f75d4f6"} Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.683871 4660 scope.go:117] "RemoveContainer" containerID="f76af4f7f7f3685426fde8f45678cc0eb86e3cae8d7c0c6fb36619171c3046b6" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.724098 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.724161 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv" event={"ID":"acd13a2f-baca-4327-bc20-f7ce95020543","Type":"ContainerDied","Data":"983f245673e17b4c83275caf41b6757e9c7bed91c3f9aeffa96a026a4f17e7a8"} Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.731061 4660 generic.go:334] "Generic (PLEG): container finished" podID="92ffe523-8507-4281-b64f-a27d0c2df483" containerID="943e125289c46c53be29f89d379a3995066c4a84b64f687fe9a6f2f453570e29" exitCode=0 Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.731174 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"92ffe523-8507-4281-b64f-a27d0c2df483","Type":"ContainerDied","Data":"943e125289c46c53be29f89d379a3995066c4a84b64f687fe9a6f2f453570e29"} Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.743716 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69c97c8649-swpc4"] Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.746253 4660 scope.go:117] "RemoveContainer" containerID="983b5ab7fe18885af648f8386b29116cf7e42750c1a3f316bdba3f1aa259d0e9" Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.747545 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-69c97c8649-swpc4"] Jan 29 12:09:20 crc kubenswrapper[4660]: E0129 12:09:20.748630 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bchtm" podUID="6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" Jan 29 
12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.783480 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv"] Jan 29 12:09:20 crc kubenswrapper[4660]: I0129 12:09:20.809614 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6878b88c94-w5zdv"] Jan 29 12:09:21 crc kubenswrapper[4660]: I0129 12:09:21.477604 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20f897e6-b397-4065-9287-59e822d220bb" path="/var/lib/kubelet/pods/20f897e6-b397-4065-9287-59e822d220bb/volumes" Jan 29 12:09:21 crc kubenswrapper[4660]: I0129 12:09:21.478228 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd13a2f-baca-4327-bc20-f7ce95020543" path="/var/lib/kubelet/pods/acd13a2f-baca-4327-bc20-f7ce95020543/volumes" Jan 29 12:09:21 crc kubenswrapper[4660]: I0129 12:09:21.741654 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" event={"ID":"53cc53e5-d343-4847-b168-17099269a92c","Type":"ContainerStarted","Data":"3a423477d0939addb1c153dd115530e72c7729e0524041bd4b77c3b527f920b0"} Jan 29 12:09:21 crc kubenswrapper[4660]: I0129 12:09:21.743018 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:21 crc kubenswrapper[4660]: I0129 12:09:21.749872 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:21 crc kubenswrapper[4660]: I0129 12:09:21.761198 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" podStartSLOduration=8.761180313 podStartE2EDuration="8.761180313s" podCreationTimestamp="2026-01-29 12:09:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:09:21.75998995 +0000 UTC m=+198.982932112" watchObservedRunningTime="2026-01-29 12:09:21.761180313 +0000 UTC m=+198.984122435" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.098285 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.195999 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92ffe523-8507-4281-b64f-a27d0c2df483-kubelet-dir\") pod \"92ffe523-8507-4281-b64f-a27d0c2df483\" (UID: \"92ffe523-8507-4281-b64f-a27d0c2df483\") " Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.196060 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92ffe523-8507-4281-b64f-a27d0c2df483-kube-api-access\") pod \"92ffe523-8507-4281-b64f-a27d0c2df483\" (UID: \"92ffe523-8507-4281-b64f-a27d0c2df483\") " Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.196143 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92ffe523-8507-4281-b64f-a27d0c2df483-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "92ffe523-8507-4281-b64f-a27d0c2df483" (UID: "92ffe523-8507-4281-b64f-a27d0c2df483"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.196312 4660 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92ffe523-8507-4281-b64f-a27d0c2df483-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.212966 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92ffe523-8507-4281-b64f-a27d0c2df483-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "92ffe523-8507-4281-b64f-a27d0c2df483" (UID: "92ffe523-8507-4281-b64f-a27d0c2df483"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.297219 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92ffe523-8507-4281-b64f-a27d0c2df483-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.544369 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct"] Jan 29 12:09:22 crc kubenswrapper[4660]: E0129 12:09:22.544953 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ffe523-8507-4281-b64f-a27d0c2df483" containerName="pruner" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.544968 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ffe523-8507-4281-b64f-a27d0c2df483" containerName="pruner" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.545055 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="92ffe523-8507-4281-b64f-a27d0c2df483" containerName="pruner" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.545435 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:22 crc kubenswrapper[4660]: W0129 12:09:22.547527 4660 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 29 12:09:22 crc kubenswrapper[4660]: W0129 12:09:22.547547 4660 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 29 12:09:22 crc kubenswrapper[4660]: E0129 12:09:22.547559 4660 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:09:22 crc kubenswrapper[4660]: W0129 12:09:22.547566 4660 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 29 12:09:22 crc kubenswrapper[4660]: E0129 12:09:22.547568 4660 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:09:22 crc kubenswrapper[4660]: W0129 12:09:22.547586 4660 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 29 12:09:22 crc kubenswrapper[4660]: E0129 12:09:22.547608 4660 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:09:22 crc kubenswrapper[4660]: E0129 12:09:22.547609 4660 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:09:22 crc kubenswrapper[4660]: W0129 12:09:22.547529 4660 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps 
"kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 29 12:09:22 crc kubenswrapper[4660]: E0129 12:09:22.547640 4660 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:09:22 crc kubenswrapper[4660]: W0129 12:09:22.547527 4660 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 29 12:09:22 crc kubenswrapper[4660]: E0129 12:09:22.547664 4660 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.552984 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct"] Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.701339 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnd9w\" (UniqueName: 
\"kubernetes.io/projected/c049f95e-1766-4708-96b9-1fac57ff03cb-kube-api-access-gnd9w\") pod \"route-controller-manager-f7cf58869-j4gct\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.701386 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-config\") pod \"route-controller-manager-f7cf58869-j4gct\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.701415 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-client-ca\") pod \"route-controller-manager-f7cf58869-j4gct\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.701448 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c049f95e-1766-4708-96b9-1fac57ff03cb-serving-cert\") pod \"route-controller-manager-f7cf58869-j4gct\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.748266 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvv98" event={"ID":"58e4fad5-42bd-4652-9a66-d80ada00bb84","Type":"ContainerStarted","Data":"88fd49af8fac05bc9a1028b98e7537cf63eefc3c94fcbe1f93cc13ce55ad72b3"} Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.751102 4660 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.751100 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"92ffe523-8507-4281-b64f-a27d0c2df483","Type":"ContainerDied","Data":"e9d8e0b1f9b9feb824543c43ca39404e2f59d7297121bd2dcfa14bc175e33de3"} Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.751149 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9d8e0b1f9b9feb824543c43ca39404e2f59d7297121bd2dcfa14bc175e33de3" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.764015 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rvv98" podStartSLOduration=4.274051836 podStartE2EDuration="51.76399316s" podCreationTimestamp="2026-01-29 12:08:31 +0000 UTC" firstStartedPulling="2026-01-29 12:08:34.332710668 +0000 UTC m=+151.555652800" lastFinishedPulling="2026-01-29 12:09:21.822651992 +0000 UTC m=+199.045594124" observedRunningTime="2026-01-29 12:09:22.761926441 +0000 UTC m=+199.984868593" watchObservedRunningTime="2026-01-29 12:09:22.76399316 +0000 UTC m=+199.986935302" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.783884 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.784671 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.787153 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.787400 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.791628 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.802830 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnd9w\" (UniqueName: \"kubernetes.io/projected/c049f95e-1766-4708-96b9-1fac57ff03cb-kube-api-access-gnd9w\") pod \"route-controller-manager-f7cf58869-j4gct\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.802883 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-config\") pod \"route-controller-manager-f7cf58869-j4gct\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.802911 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-client-ca\") pod \"route-controller-manager-f7cf58869-j4gct\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.802945 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c049f95e-1766-4708-96b9-1fac57ff03cb-serving-cert\") pod \"route-controller-manager-f7cf58869-j4gct\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.904506 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ac6d365e-6112-4542-9b4f-5f5ac1227bb4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.904775 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-kube-api-access\") pod \"installer-9-crc\" (UID: \"ac6d365e-6112-4542-9b4f-5f5ac1227bb4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:09:22 crc kubenswrapper[4660]: I0129 12:09:22.904816 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-var-lock\") pod \"installer-9-crc\" (UID: \"ac6d365e-6112-4542-9b4f-5f5ac1227bb4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:09:23 crc kubenswrapper[4660]: I0129 12:09:23.006957 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-kube-api-access\") pod \"installer-9-crc\" (UID: \"ac6d365e-6112-4542-9b4f-5f5ac1227bb4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:09:23 crc kubenswrapper[4660]: I0129 12:09:23.007031 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-var-lock\") pod \"installer-9-crc\" (UID: \"ac6d365e-6112-4542-9b4f-5f5ac1227bb4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:09:23 crc kubenswrapper[4660]: I0129 12:09:23.007096 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ac6d365e-6112-4542-9b4f-5f5ac1227bb4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:09:23 crc kubenswrapper[4660]: I0129 12:09:23.007385 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ac6d365e-6112-4542-9b4f-5f5ac1227bb4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:09:23 crc kubenswrapper[4660]: I0129 12:09:23.007828 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-var-lock\") pod \"installer-9-crc\" (UID: \"ac6d365e-6112-4542-9b4f-5f5ac1227bb4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:09:23 crc kubenswrapper[4660]: I0129 12:09:23.026400 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-kube-api-access\") pod \"installer-9-crc\" (UID: \"ac6d365e-6112-4542-9b4f-5f5ac1227bb4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:09:23 crc kubenswrapper[4660]: I0129 12:09:23.098522 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:09:23 crc kubenswrapper[4660]: I0129 12:09:23.540972 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 12:09:23 crc kubenswrapper[4660]: W0129 12:09:23.554830 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podac6d365e_6112_4542_9b4f_5f5ac1227bb4.slice/crio-ac3edf91cb5f5656147328398d4ae4873cc6b08eeedb084efc1956aaca2cfbe4 WatchSource:0}: Error finding container ac3edf91cb5f5656147328398d4ae4873cc6b08eeedb084efc1956aaca2cfbe4: Status 404 returned error can't find the container with id ac3edf91cb5f5656147328398d4ae4873cc6b08eeedb084efc1956aaca2cfbe4 Jan 29 12:09:23 crc kubenswrapper[4660]: I0129 12:09:23.648061 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 12:09:23 crc kubenswrapper[4660]: I0129 12:09:23.762388 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ac6d365e-6112-4542-9b4f-5f5ac1227bb4","Type":"ContainerStarted","Data":"ac3edf91cb5f5656147328398d4ae4873cc6b08eeedb084efc1956aaca2cfbe4"} Jan 29 12:09:23 crc kubenswrapper[4660]: E0129 12:09:23.803260 4660 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 12:09:23 crc kubenswrapper[4660]: E0129 12:09:23.803705 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c049f95e-1766-4708-96b9-1fac57ff03cb-serving-cert podName:c049f95e-1766-4708-96b9-1fac57ff03cb nodeName:}" failed. No retries permitted until 2026-01-29 12:09:24.303668658 +0000 UTC m=+201.526610790 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c049f95e-1766-4708-96b9-1fac57ff03cb-serving-cert") pod "route-controller-manager-f7cf58869-j4gct" (UID: "c049f95e-1766-4708-96b9-1fac57ff03cb") : failed to sync secret cache: timed out waiting for the condition Jan 29 12:09:23 crc kubenswrapper[4660]: E0129 12:09:23.803786 4660 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 29 12:09:23 crc kubenswrapper[4660]: E0129 12:09:23.803812 4660 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 29 12:09:23 crc kubenswrapper[4660]: E0129 12:09:23.803835 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-config podName:c049f95e-1766-4708-96b9-1fac57ff03cb nodeName:}" failed. No retries permitted until 2026-01-29 12:09:24.303822272 +0000 UTC m=+201.526764404 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-config") pod "route-controller-manager-f7cf58869-j4gct" (UID: "c049f95e-1766-4708-96b9-1fac57ff03cb") : failed to sync configmap cache: timed out waiting for the condition Jan 29 12:09:23 crc kubenswrapper[4660]: E0129 12:09:23.803888 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-client-ca podName:c049f95e-1766-4708-96b9-1fac57ff03cb nodeName:}" failed. No retries permitted until 2026-01-29 12:09:24.303869014 +0000 UTC m=+201.526811146 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-client-ca") pod "route-controller-manager-f7cf58869-j4gct" (UID: "c049f95e-1766-4708-96b9-1fac57ff03cb") : failed to sync configmap cache: timed out waiting for the condition Jan 29 12:09:23 crc kubenswrapper[4660]: I0129 12:09:23.858901 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 12:09:23 crc kubenswrapper[4660]: I0129 12:09:23.942488 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 12:09:23 crc kubenswrapper[4660]: I0129 12:09:23.953178 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnd9w\" (UniqueName: \"kubernetes.io/projected/c049f95e-1766-4708-96b9-1fac57ff03cb-kube-api-access-gnd9w\") pod \"route-controller-manager-f7cf58869-j4gct\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.009521 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-grgf5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.009586 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.009618 4660 patch_prober.go:28] interesting pod/downloads-7954f5f757-grgf5 container/download-server 
namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.009669 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-grgf5" podUID="5d163d7f-d49a-4487-9a62-a094182ac910" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.074838 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.139240 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.141511 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.335727 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-config\") pod \"route-controller-manager-f7cf58869-j4gct\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.335781 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-client-ca\") pod \"route-controller-manager-f7cf58869-j4gct\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:24 crc 
kubenswrapper[4660]: I0129 12:09:24.335829 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c049f95e-1766-4708-96b9-1fac57ff03cb-serving-cert\") pod \"route-controller-manager-f7cf58869-j4gct\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.336910 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-client-ca\") pod \"route-controller-manager-f7cf58869-j4gct\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.337035 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-config\") pod \"route-controller-manager-f7cf58869-j4gct\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.352919 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c049f95e-1766-4708-96b9-1fac57ff03cb-serving-cert\") pod \"route-controller-manager-f7cf58869-j4gct\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.367543 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.684854 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct"] Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.768929 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" event={"ID":"c049f95e-1766-4708-96b9-1fac57ff03cb","Type":"ContainerStarted","Data":"fa92fbb4a26a3eef279e03d87abe47946e8923e80e37739a023b444fc1c3dc64"} Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.769847 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ac6d365e-6112-4542-9b4f-5f5ac1227bb4","Type":"ContainerStarted","Data":"57c91eeb2c6becb60fe65a233537c09a6deb02f1e4f2586cd691df2cd73991d9"} Jan 29 12:09:24 crc kubenswrapper[4660]: I0129 12:09:24.785101 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.785085909 podStartE2EDuration="2.785085909s" podCreationTimestamp="2026-01-29 12:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:09:24.78300364 +0000 UTC m=+202.005945772" watchObservedRunningTime="2026-01-29 12:09:24.785085909 +0000 UTC m=+202.008028041" Jan 29 12:09:25 crc kubenswrapper[4660]: I0129 12:09:25.775979 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" event={"ID":"c049f95e-1766-4708-96b9-1fac57ff03cb","Type":"ContainerStarted","Data":"1f2103d6d1c63d3bd6a6d51dfc0180a2445b30a2cbc50bf59486a33850e84811"} Jan 29 12:09:26 crc kubenswrapper[4660]: I0129 12:09:26.270052 4660 patch_prober.go:28] interesting 
pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:09:26 crc kubenswrapper[4660]: I0129 12:09:26.270133 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:09:26 crc kubenswrapper[4660]: I0129 12:09:26.270209 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:09:26 crc kubenswrapper[4660]: I0129 12:09:26.270636 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d"} pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:09:26 crc kubenswrapper[4660]: I0129 12:09:26.270706 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" containerID="cri-o://587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d" gracePeriod=600 Jan 29 12:09:26 crc kubenswrapper[4660]: I0129 12:09:26.783012 4660 generic.go:334] "Generic (PLEG): container finished" podID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerID="587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d" exitCode=0 Jan 29 12:09:26 crc kubenswrapper[4660]: I0129 12:09:26.783074 
4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerDied","Data":"587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d"} Jan 29 12:09:26 crc kubenswrapper[4660]: I0129 12:09:26.783844 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:26 crc kubenswrapper[4660]: I0129 12:09:26.789276 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:26 crc kubenswrapper[4660]: I0129 12:09:26.807105 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" podStartSLOduration=13.807088245 podStartE2EDuration="13.807088245s" podCreationTimestamp="2026-01-29 12:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:09:25.806551123 +0000 UTC m=+203.029493265" watchObservedRunningTime="2026-01-29 12:09:26.807088245 +0000 UTC m=+204.030030377" Jan 29 12:09:28 crc kubenswrapper[4660]: I0129 12:09:28.795619 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerStarted","Data":"eff0713f6bf872ade78cf9d0d16ea58cba32680112838028abfc718ba0e896cc"} Jan 29 12:09:30 crc kubenswrapper[4660]: I0129 12:09:30.807971 4660 generic.go:334] "Generic (PLEG): container finished" podID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" containerID="1aa753813f78692639aa9b672d2a4425011c71a8bba2a365f7dcf2a330bee17e" exitCode=0 Jan 29 12:09:30 crc kubenswrapper[4660]: I0129 12:09:30.808042 4660 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwwx4" event={"ID":"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea","Type":"ContainerDied","Data":"1aa753813f78692639aa9b672d2a4425011c71a8bba2a365f7dcf2a330bee17e"} Jan 29 12:09:30 crc kubenswrapper[4660]: I0129 12:09:30.811664 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2dh" event={"ID":"d4b35a13-7853-4d23-9cd4-015b2d10d25a","Type":"ContainerStarted","Data":"3e99974af0afc770aac40869bd81bd0d8cea24cb838f73c30f23a440a54c4276"} Jan 29 12:09:31 crc kubenswrapper[4660]: I0129 12:09:31.747279 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:09:31 crc kubenswrapper[4660]: I0129 12:09:31.749203 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:09:31 crc kubenswrapper[4660]: I0129 12:09:31.818592 4660 generic.go:334] "Generic (PLEG): container finished" podID="d4b35a13-7853-4d23-9cd4-015b2d10d25a" containerID="3e99974af0afc770aac40869bd81bd0d8cea24cb838f73c30f23a440a54c4276" exitCode=0 Jan 29 12:09:31 crc kubenswrapper[4660]: I0129 12:09:31.818642 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2dh" event={"ID":"d4b35a13-7853-4d23-9cd4-015b2d10d25a","Type":"ContainerDied","Data":"3e99974af0afc770aac40869bd81bd0d8cea24cb838f73c30f23a440a54c4276"} Jan 29 12:09:32 crc kubenswrapper[4660]: I0129 12:09:32.183890 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:09:32 crc kubenswrapper[4660]: I0129 12:09:32.242580 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:09:32 crc kubenswrapper[4660]: I0129 12:09:32.825884 4660 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-7lgwk" event={"ID":"42a55618-51a2-4df8-b139-ed326fd6371f","Type":"ContainerStarted","Data":"417c38450471ca91e91be6ca0fca649416566606ad81b8ad33f1be65c20e12b2"} Jan 29 12:09:32 crc kubenswrapper[4660]: I0129 12:09:32.828896 4660 generic.go:334] "Generic (PLEG): container finished" podID="d70ebf77-b129-4e52-85f6-f969b97e855e" containerID="351453542b518a53669db4fbd737dfa34afa77ef3fba903b1c0e999d250ffeaf" exitCode=0 Jan 29 12:09:32 crc kubenswrapper[4660]: I0129 12:09:32.828962 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49j6q" event={"ID":"d70ebf77-b129-4e52-85f6-f969b97e855e","Type":"ContainerDied","Data":"351453542b518a53669db4fbd737dfa34afa77ef3fba903b1c0e999d250ffeaf"} Jan 29 12:09:32 crc kubenswrapper[4660]: I0129 12:09:32.839142 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwwx4" event={"ID":"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea","Type":"ContainerStarted","Data":"6f20192daf96734714618a8926c62c21c0c891c33533c0dbe88f2790b0e38af8"} Jan 29 12:09:32 crc kubenswrapper[4660]: I0129 12:09:32.892632 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qwwx4" podStartSLOduration=4.250347878 podStartE2EDuration="1m0.892611333s" podCreationTimestamp="2026-01-29 12:08:32 +0000 UTC" firstStartedPulling="2026-01-29 12:08:35.507398033 +0000 UTC m=+152.730340165" lastFinishedPulling="2026-01-29 12:09:32.149661488 +0000 UTC m=+209.372603620" observedRunningTime="2026-01-29 12:09:32.890467143 +0000 UTC m=+210.113409275" watchObservedRunningTime="2026-01-29 12:09:32.892611333 +0000 UTC m=+210.115553465" Jan 29 12:09:33 crc kubenswrapper[4660]: I0129 12:09:33.207754 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ccbd686fb-2klcl"] Jan 29 12:09:33 crc kubenswrapper[4660]: I0129 
12:09:33.208011 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" podUID="53cc53e5-d343-4847-b168-17099269a92c" containerName="controller-manager" containerID="cri-o://3a423477d0939addb1c153dd115530e72c7729e0524041bd4b77c3b527f920b0" gracePeriod=30 Jan 29 12:09:33 crc kubenswrapper[4660]: I0129 12:09:33.246775 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct"] Jan 29 12:09:33 crc kubenswrapper[4660]: I0129 12:09:33.247026 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" podUID="c049f95e-1766-4708-96b9-1fac57ff03cb" containerName="route-controller-manager" containerID="cri-o://1f2103d6d1c63d3bd6a6d51dfc0180a2445b30a2cbc50bf59486a33850e84811" gracePeriod=30 Jan 29 12:09:33 crc kubenswrapper[4660]: I0129 12:09:33.390062 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:09:33 crc kubenswrapper[4660]: I0129 12:09:33.390113 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:09:33 crc kubenswrapper[4660]: I0129 12:09:33.845721 4660 generic.go:334] "Generic (PLEG): container finished" podID="42a55618-51a2-4df8-b139-ed326fd6371f" containerID="417c38450471ca91e91be6ca0fca649416566606ad81b8ad33f1be65c20e12b2" exitCode=0 Jan 29 12:09:33 crc kubenswrapper[4660]: I0129 12:09:33.845782 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lgwk" event={"ID":"42a55618-51a2-4df8-b139-ed326fd6371f","Type":"ContainerDied","Data":"417c38450471ca91e91be6ca0fca649416566606ad81b8ad33f1be65c20e12b2"} Jan 29 12:09:33 crc kubenswrapper[4660]: I0129 12:09:33.848451 4660 generic.go:334] "Generic 
(PLEG): container finished" podID="53cc53e5-d343-4847-b168-17099269a92c" containerID="3a423477d0939addb1c153dd115530e72c7729e0524041bd4b77c3b527f920b0" exitCode=0 Jan 29 12:09:33 crc kubenswrapper[4660]: I0129 12:09:33.848536 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" event={"ID":"53cc53e5-d343-4847-b168-17099269a92c","Type":"ContainerDied","Data":"3a423477d0939addb1c153dd115530e72c7729e0524041bd4b77c3b527f920b0"} Jan 29 12:09:33 crc kubenswrapper[4660]: I0129 12:09:33.850048 4660 generic.go:334] "Generic (PLEG): container finished" podID="c049f95e-1766-4708-96b9-1fac57ff03cb" containerID="1f2103d6d1c63d3bd6a6d51dfc0180a2445b30a2cbc50bf59486a33850e84811" exitCode=0 Jan 29 12:09:33 crc kubenswrapper[4660]: I0129 12:09:33.850559 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" event={"ID":"c049f95e-1766-4708-96b9-1fac57ff03cb","Type":"ContainerDied","Data":"1f2103d6d1c63d3bd6a6d51dfc0180a2445b30a2cbc50bf59486a33850e84811"} Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.015677 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-grgf5" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.403589 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.409201 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.428754 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c"] Jan 29 12:09:34 crc kubenswrapper[4660]: E0129 12:09:34.428966 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53cc53e5-d343-4847-b168-17099269a92c" containerName="controller-manager" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.428978 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="53cc53e5-d343-4847-b168-17099269a92c" containerName="controller-manager" Jan 29 12:09:34 crc kubenswrapper[4660]: E0129 12:09:34.428992 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c049f95e-1766-4708-96b9-1fac57ff03cb" containerName="route-controller-manager" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.428998 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c049f95e-1766-4708-96b9-1fac57ff03cb" containerName="route-controller-manager" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.429100 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="c049f95e-1766-4708-96b9-1fac57ff03cb" containerName="route-controller-manager" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.429109 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="53cc53e5-d343-4847-b168-17099269a92c" containerName="controller-manager" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.429461 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.438027 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-qwwx4" podUID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" containerName="registry-server" probeResult="failure" output=< Jan 29 12:09:34 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Jan 29 12:09:34 crc kubenswrapper[4660]: > Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.448641 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnd9w\" (UniqueName: \"kubernetes.io/projected/c049f95e-1766-4708-96b9-1fac57ff03cb-kube-api-access-gnd9w\") pod \"c049f95e-1766-4708-96b9-1fac57ff03cb\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.448710 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-config\") pod \"53cc53e5-d343-4847-b168-17099269a92c\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.448735 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c049f95e-1766-4708-96b9-1fac57ff03cb-serving-cert\") pod \"c049f95e-1766-4708-96b9-1fac57ff03cb\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.448762 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-proxy-ca-bundles\") pod \"53cc53e5-d343-4847-b168-17099269a92c\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.448798 
4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-client-ca\") pod \"c049f95e-1766-4708-96b9-1fac57ff03cb\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.448826 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-config\") pod \"c049f95e-1766-4708-96b9-1fac57ff03cb\" (UID: \"c049f95e-1766-4708-96b9-1fac57ff03cb\") " Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.448843 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlttc\" (UniqueName: \"kubernetes.io/projected/53cc53e5-d343-4847-b168-17099269a92c-kube-api-access-hlttc\") pod \"53cc53e5-d343-4847-b168-17099269a92c\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.448860 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-client-ca\") pod \"53cc53e5-d343-4847-b168-17099269a92c\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.448883 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53cc53e5-d343-4847-b168-17099269a92c-serving-cert\") pod \"53cc53e5-d343-4847-b168-17099269a92c\" (UID: \"53cc53e5-d343-4847-b168-17099269a92c\") " Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.448973 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e45cc055-9cef-43ec-b66c-a9474f0f5b46-client-ca\") pod \"route-controller-manager-6bdff7df7b-p4z7c\" (UID: 
\"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.449006 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e45cc055-9cef-43ec-b66c-a9474f0f5b46-serving-cert\") pod \"route-controller-manager-6bdff7df7b-p4z7c\" (UID: \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.449029 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74852\" (UniqueName: \"kubernetes.io/projected/e45cc055-9cef-43ec-b66c-a9474f0f5b46-kube-api-access-74852\") pod \"route-controller-manager-6bdff7df7b-p4z7c\" (UID: \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.449068 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45cc055-9cef-43ec-b66c-a9474f0f5b46-config\") pod \"route-controller-manager-6bdff7df7b-p4z7c\" (UID: \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.455304 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-client-ca" (OuterVolumeSpecName: "client-ca") pod "53cc53e5-d343-4847-b168-17099269a92c" (UID: "53cc53e5-d343-4847-b168-17099269a92c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.455549 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-client-ca" (OuterVolumeSpecName: "client-ca") pod "c049f95e-1766-4708-96b9-1fac57ff03cb" (UID: "c049f95e-1766-4708-96b9-1fac57ff03cb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.455703 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-config" (OuterVolumeSpecName: "config") pod "53cc53e5-d343-4847-b168-17099269a92c" (UID: "53cc53e5-d343-4847-b168-17099269a92c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.456205 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-config" (OuterVolumeSpecName: "config") pod "c049f95e-1766-4708-96b9-1fac57ff03cb" (UID: "c049f95e-1766-4708-96b9-1fac57ff03cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.460983 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c049f95e-1766-4708-96b9-1fac57ff03cb-kube-api-access-gnd9w" (OuterVolumeSpecName: "kube-api-access-gnd9w") pod "c049f95e-1766-4708-96b9-1fac57ff03cb" (UID: "c049f95e-1766-4708-96b9-1fac57ff03cb"). InnerVolumeSpecName "kube-api-access-gnd9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.461255 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "53cc53e5-d343-4847-b168-17099269a92c" (UID: "53cc53e5-d343-4847-b168-17099269a92c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.462887 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c"] Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.471873 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c049f95e-1766-4708-96b9-1fac57ff03cb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c049f95e-1766-4708-96b9-1fac57ff03cb" (UID: "c049f95e-1766-4708-96b9-1fac57ff03cb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.472648 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53cc53e5-d343-4847-b168-17099269a92c-kube-api-access-hlttc" (OuterVolumeSpecName: "kube-api-access-hlttc") pod "53cc53e5-d343-4847-b168-17099269a92c" (UID: "53cc53e5-d343-4847-b168-17099269a92c"). InnerVolumeSpecName "kube-api-access-hlttc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.473242 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53cc53e5-d343-4847-b168-17099269a92c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "53cc53e5-d343-4847-b168-17099269a92c" (UID: "53cc53e5-d343-4847-b168-17099269a92c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.549661 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e45cc055-9cef-43ec-b66c-a9474f0f5b46-serving-cert\") pod \"route-controller-manager-6bdff7df7b-p4z7c\" (UID: \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.549731 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74852\" (UniqueName: \"kubernetes.io/projected/e45cc055-9cef-43ec-b66c-a9474f0f5b46-kube-api-access-74852\") pod \"route-controller-manager-6bdff7df7b-p4z7c\" (UID: \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.550725 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45cc055-9cef-43ec-b66c-a9474f0f5b46-config\") pod \"route-controller-manager-6bdff7df7b-p4z7c\" (UID: \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.550839 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e45cc055-9cef-43ec-b66c-a9474f0f5b46-client-ca\") pod \"route-controller-manager-6bdff7df7b-p4z7c\" (UID: \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.551095 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e45cc055-9cef-43ec-b66c-a9474f0f5b46-config\") pod \"route-controller-manager-6bdff7df7b-p4z7c\" (UID: \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.551161 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.551198 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c049f95e-1766-4708-96b9-1fac57ff03cb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.551212 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.551224 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.551235 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c049f95e-1766-4708-96b9-1fac57ff03cb-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.551247 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlttc\" (UniqueName: \"kubernetes.io/projected/53cc53e5-d343-4847-b168-17099269a92c-kube-api-access-hlttc\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.551258 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/53cc53e5-d343-4847-b168-17099269a92c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.551268 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53cc53e5-d343-4847-b168-17099269a92c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.551278 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnd9w\" (UniqueName: \"kubernetes.io/projected/c049f95e-1766-4708-96b9-1fac57ff03cb-kube-api-access-gnd9w\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.551819 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e45cc055-9cef-43ec-b66c-a9474f0f5b46-client-ca\") pod \"route-controller-manager-6bdff7df7b-p4z7c\" (UID: \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.554358 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e45cc055-9cef-43ec-b66c-a9474f0f5b46-serving-cert\") pod \"route-controller-manager-6bdff7df7b-p4z7c\" (UID: \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.581447 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74852\" (UniqueName: \"kubernetes.io/projected/e45cc055-9cef-43ec-b66c-a9474f0f5b46-kube-api-access-74852\") pod \"route-controller-manager-6bdff7df7b-p4z7c\" (UID: \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 
12:09:34.762021 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.857258 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.857234 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" event={"ID":"c049f95e-1766-4708-96b9-1fac57ff03cb","Type":"ContainerDied","Data":"fa92fbb4a26a3eef279e03d87abe47946e8923e80e37739a023b444fc1c3dc64"} Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.857419 4660 scope.go:117] "RemoveContainer" containerID="1f2103d6d1c63d3bd6a6d51dfc0180a2445b30a2cbc50bf59486a33850e84811" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.868802 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" event={"ID":"53cc53e5-d343-4847-b168-17099269a92c","Type":"ContainerDied","Data":"6ab1e3c2a14849e76a885b4b645989f6a21fc226654de347b04150112b713e37"} Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.868927 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ccbd686fb-2klcl" Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.894210 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct"] Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.901965 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct"] Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.916357 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ccbd686fb-2klcl"] Jan 29 12:09:34 crc kubenswrapper[4660]: I0129 12:09:34.919947 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6ccbd686fb-2klcl"] Jan 29 12:09:35 crc kubenswrapper[4660]: I0129 12:09:35.126054 4660 scope.go:117] "RemoveContainer" containerID="3a423477d0939addb1c153dd115530e72c7729e0524041bd4b77c3b527f920b0" Jan 29 12:09:35 crc kubenswrapper[4660]: I0129 12:09:35.368645 4660 patch_prober.go:28] interesting pod/route-controller-manager-f7cf58869-j4gct container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 12:09:35 crc kubenswrapper[4660]: I0129 12:09:35.368736 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-f7cf58869-j4gct" podUID="c049f95e-1766-4708-96b9-1fac57ff03cb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 12:09:35 crc kubenswrapper[4660]: I0129 
12:09:35.478712 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53cc53e5-d343-4847-b168-17099269a92c" path="/var/lib/kubelet/pods/53cc53e5-d343-4847-b168-17099269a92c/volumes" Jan 29 12:09:35 crc kubenswrapper[4660]: I0129 12:09:35.479286 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c049f95e-1766-4708-96b9-1fac57ff03cb" path="/var/lib/kubelet/pods/c049f95e-1766-4708-96b9-1fac57ff03cb/volumes" Jan 29 12:09:35 crc kubenswrapper[4660]: I0129 12:09:35.755250 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c"] Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.573444 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f976bb66f-thqg2"] Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.574591 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.579884 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.580636 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.580752 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.580825 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.581012 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 
12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.583223 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f976bb66f-thqg2"] Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.583620 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.590846 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.680113 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-client-ca\") pod \"controller-manager-5f976bb66f-thqg2\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.680557 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96mm5\" (UniqueName: \"kubernetes.io/projected/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-kube-api-access-96mm5\") pod \"controller-manager-5f976bb66f-thqg2\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.680745 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-config\") pod \"controller-manager-5f976bb66f-thqg2\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.680886 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-proxy-ca-bundles\") pod \"controller-manager-5f976bb66f-thqg2\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.681026 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-serving-cert\") pod \"controller-manager-5f976bb66f-thqg2\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.782467 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-config\") pod \"controller-manager-5f976bb66f-thqg2\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.782890 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-proxy-ca-bundles\") pod \"controller-manager-5f976bb66f-thqg2\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.783045 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-serving-cert\") pod \"controller-manager-5f976bb66f-thqg2\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 
crc kubenswrapper[4660]: I0129 12:09:36.783137 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-client-ca\") pod \"controller-manager-5f976bb66f-thqg2\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.783229 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96mm5\" (UniqueName: \"kubernetes.io/projected/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-kube-api-access-96mm5\") pod \"controller-manager-5f976bb66f-thqg2\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.783981 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-config\") pod \"controller-manager-5f976bb66f-thqg2\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.785391 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-proxy-ca-bundles\") pod \"controller-manager-5f976bb66f-thqg2\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.786144 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-client-ca\") pod \"controller-manager-5f976bb66f-thqg2\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " 
pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.797607 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-serving-cert\") pod \"controller-manager-5f976bb66f-thqg2\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.800466 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96mm5\" (UniqueName: \"kubernetes.io/projected/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-kube-api-access-96mm5\") pod \"controller-manager-5f976bb66f-thqg2\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.882455 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" event={"ID":"e45cc055-9cef-43ec-b66c-a9474f0f5b46","Type":"ContainerStarted","Data":"21a2aad043a16d9c625e95cbfa2dee75e2a26266b70e74016f9a2c6abfa21d6f"} Jan 29 12:09:36 crc kubenswrapper[4660]: I0129 12:09:36.891226 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:37 crc kubenswrapper[4660]: I0129 12:09:37.890317 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2dh" event={"ID":"d4b35a13-7853-4d23-9cd4-015b2d10d25a","Type":"ContainerStarted","Data":"1a4c12a7c1653f3e40f6d5f341bf8a14c4be44f43d1b21597a978dc09bc0ac11"} Jan 29 12:09:38 crc kubenswrapper[4660]: I0129 12:09:38.913840 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9x2dh" podStartSLOduration=6.678672066 podStartE2EDuration="1m7.913823903s" podCreationTimestamp="2026-01-29 12:08:31 +0000 UTC" firstStartedPulling="2026-01-29 12:08:34.314963038 +0000 UTC m=+151.537905170" lastFinishedPulling="2026-01-29 12:09:35.550114875 +0000 UTC m=+212.773057007" observedRunningTime="2026-01-29 12:09:38.911411794 +0000 UTC m=+216.134353936" watchObservedRunningTime="2026-01-29 12:09:38.913823903 +0000 UTC m=+216.136766035" Jan 29 12:09:41 crc kubenswrapper[4660]: I0129 12:09:41.825157 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:09:41 crc kubenswrapper[4660]: I0129 12:09:41.825538 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:09:41 crc kubenswrapper[4660]: I0129 12:09:41.862175 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:09:41 crc kubenswrapper[4660]: I0129 12:09:41.973015 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:09:42 crc kubenswrapper[4660]: I0129 12:09:42.707719 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-5f976bb66f-thqg2"] Jan 29 12:09:43 crc kubenswrapper[4660]: W0129 12:09:43.043190 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04a42c98_e16e_4b8a_90a4_f17d06dcfbcd.slice/crio-7992134e566792ef155946ec14b947cd242df817f5ec185fa5c2174c81c3229c WatchSource:0}: Error finding container 7992134e566792ef155946ec14b947cd242df817f5ec185fa5c2174c81c3229c: Status 404 returned error can't find the container with id 7992134e566792ef155946ec14b947cd242df817f5ec185fa5c2174c81c3229c Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.449009 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.518754 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.928648 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" event={"ID":"e45cc055-9cef-43ec-b66c-a9474f0f5b46","Type":"ContainerStarted","Data":"422dcac1a5b70d75c9d633bd8da4352a9df2526e0eef98ff85b058f503dd7c90"} Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.929975 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.932139 4660 generic.go:334] "Generic (PLEG): container finished" podID="2f46b165-b1bd-42ea-8704-adafba36b152" containerID="2a34d2cdd8fa74712edb9ad7f825e0ac6185ed7e03cc80a5007fb126a19c93e3" exitCode=0 Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.932200 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x95ls" 
event={"ID":"2f46b165-b1bd-42ea-8704-adafba36b152","Type":"ContainerDied","Data":"2a34d2cdd8fa74712edb9ad7f825e0ac6185ed7e03cc80a5007fb126a19c93e3"} Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.935247 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lgwk" event={"ID":"42a55618-51a2-4df8-b139-ed326fd6371f","Type":"ContainerStarted","Data":"26709b15264a5cda86f43cbb1863cc4d4a6f957d3b120cc4bb383ac4d955cfcf"} Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.937393 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" event={"ID":"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd","Type":"ContainerStarted","Data":"5a83a52421f2b7224defa928ca57a81ec43eb9d2a34a9291188dcbce0044b054"} Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.937448 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" event={"ID":"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd","Type":"ContainerStarted","Data":"7992134e566792ef155946ec14b947cd242df817f5ec185fa5c2174c81c3229c"} Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.938331 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.940188 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxjk" event={"ID":"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae","Type":"ContainerStarted","Data":"0e9061321aa39ee511c3fb5c31f6a49e12edaca8da4083d147ac4c3a9be3803a"} Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.942833 4660 generic.go:334] "Generic (PLEG): container finished" podID="6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" containerID="67a8f929923488a2c3f7513a24666f6570edd6d9806753b081c288bb1ec1de42" exitCode=0 Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.942888 4660 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bchtm" event={"ID":"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c","Type":"ContainerDied","Data":"67a8f929923488a2c3f7513a24666f6570edd6d9806753b081c288bb1ec1de42"} Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.943791 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.948244 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49j6q" event={"ID":"d70ebf77-b129-4e52-85f6-f969b97e855e","Type":"ContainerStarted","Data":"46e34cfab99d1e4cfb7a5c5dee600a15638ad53355d9904ae8e6695424a509d4"} Jan 29 12:09:43 crc kubenswrapper[4660]: I0129 12:09:43.956355 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:44 crc kubenswrapper[4660]: I0129 12:09:44.012512 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" podStartSLOduration=11.012482277 podStartE2EDuration="11.012482277s" podCreationTimestamp="2026-01-29 12:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:09:43.975672866 +0000 UTC m=+221.198614998" watchObservedRunningTime="2026-01-29 12:09:44.012482277 +0000 UTC m=+221.235424409" Jan 29 12:09:44 crc kubenswrapper[4660]: I0129 12:09:44.257038 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-49j6q" podStartSLOduration=6.583087244 podStartE2EDuration="1m13.257023364s" podCreationTimestamp="2026-01-29 12:08:31 +0000 UTC" firstStartedPulling="2026-01-29 12:08:35.636638835 +0000 UTC 
m=+152.859580967" lastFinishedPulling="2026-01-29 12:09:42.310574955 +0000 UTC m=+219.533517087" observedRunningTime="2026-01-29 12:09:44.181608281 +0000 UTC m=+221.404550433" watchObservedRunningTime="2026-01-29 12:09:44.257023364 +0000 UTC m=+221.479965496" Jan 29 12:09:44 crc kubenswrapper[4660]: I0129 12:09:44.295122 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7lgwk" podStartSLOduration=6.154276376 podStartE2EDuration="1m9.295105942s" podCreationTimestamp="2026-01-29 12:08:35 +0000 UTC" firstStartedPulling="2026-01-29 12:08:39.052297897 +0000 UTC m=+156.275240029" lastFinishedPulling="2026-01-29 12:09:42.193127463 +0000 UTC m=+219.416069595" observedRunningTime="2026-01-29 12:09:44.259422802 +0000 UTC m=+221.482364934" watchObservedRunningTime="2026-01-29 12:09:44.295105942 +0000 UTC m=+221.518048074" Jan 29 12:09:44 crc kubenswrapper[4660]: I0129 12:09:44.295985 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" podStartSLOduration=11.295981166 podStartE2EDuration="11.295981166s" podCreationTimestamp="2026-01-29 12:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:09:44.295952796 +0000 UTC m=+221.518894948" watchObservedRunningTime="2026-01-29 12:09:44.295981166 +0000 UTC m=+221.518923298" Jan 29 12:09:44 crc kubenswrapper[4660]: I0129 12:09:44.954861 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bchtm" event={"ID":"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c","Type":"ContainerStarted","Data":"9186310484b4ac28a6449720f0e34988c4346cded97ea6711c705d3223b5b2fb"} Jan 29 12:09:44 crc kubenswrapper[4660]: I0129 12:09:44.957435 4660 generic.go:334] "Generic (PLEG): container finished" podID="dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" 
containerID="0e9061321aa39ee511c3fb5c31f6a49e12edaca8da4083d147ac4c3a9be3803a" exitCode=0 Jan 29 12:09:44 crc kubenswrapper[4660]: I0129 12:09:44.957482 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxjk" event={"ID":"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae","Type":"ContainerDied","Data":"0e9061321aa39ee511c3fb5c31f6a49e12edaca8da4083d147ac4c3a9be3803a"} Jan 29 12:09:44 crc kubenswrapper[4660]: I0129 12:09:44.961115 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x95ls" event={"ID":"2f46b165-b1bd-42ea-8704-adafba36b152","Type":"ContainerStarted","Data":"278d819aa7bfcd3285b3f5bead710b8d4086d2bdcab86741940a526d83d5972c"} Jan 29 12:09:44 crc kubenswrapper[4660]: I0129 12:09:44.996092 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bchtm" podStartSLOduration=3.926475682 podStartE2EDuration="1m12.996071579s" podCreationTimestamp="2026-01-29 12:08:32 +0000 UTC" firstStartedPulling="2026-01-29 12:08:35.597806286 +0000 UTC m=+152.820748418" lastFinishedPulling="2026-01-29 12:09:44.667402183 +0000 UTC m=+221.890344315" observedRunningTime="2026-01-29 12:09:44.976074253 +0000 UTC m=+222.199016395" watchObservedRunningTime="2026-01-29 12:09:44.996071579 +0000 UTC m=+222.219013711" Jan 29 12:09:44 crc kubenswrapper[4660]: I0129 12:09:44.997097 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x95ls" podStartSLOduration=4.120534496 podStartE2EDuration="1m11.997091798s" podCreationTimestamp="2026-01-29 12:08:33 +0000 UTC" firstStartedPulling="2026-01-29 12:08:36.707786791 +0000 UTC m=+153.930728923" lastFinishedPulling="2026-01-29 12:09:44.584344093 +0000 UTC m=+221.807286225" observedRunningTime="2026-01-29 12:09:44.993847926 +0000 UTC m=+222.216790068" watchObservedRunningTime="2026-01-29 12:09:44.997091798 +0000 UTC m=+222.220033930" 
Jan 29 12:09:45 crc kubenswrapper[4660]: I0129 12:09:45.767139 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7lgwk" Jan 29 12:09:45 crc kubenswrapper[4660]: I0129 12:09:45.767445 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7lgwk" Jan 29 12:09:45 crc kubenswrapper[4660]: I0129 12:09:45.969200 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxjk" event={"ID":"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae","Type":"ContainerStarted","Data":"1373dbfcfacf7889cc2b92d948647b92b055055582d47a611beb5f3239c1ac73"} Jan 29 12:09:45 crc kubenswrapper[4660]: I0129 12:09:45.995840 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nlxjk" podStartSLOduration=5.690532941 podStartE2EDuration="1m11.995824039s" podCreationTimestamp="2026-01-29 12:08:34 +0000 UTC" firstStartedPulling="2026-01-29 12:08:39.051855894 +0000 UTC m=+156.274798026" lastFinishedPulling="2026-01-29 12:09:45.357146992 +0000 UTC m=+222.580089124" observedRunningTime="2026-01-29 12:09:45.992649319 +0000 UTC m=+223.215591451" watchObservedRunningTime="2026-01-29 12:09:45.995824039 +0000 UTC m=+223.218766171" Jan 29 12:09:46 crc kubenswrapper[4660]: I0129 12:09:46.806534 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7lgwk" podUID="42a55618-51a2-4df8-b139-ed326fd6371f" containerName="registry-server" probeResult="failure" output=< Jan 29 12:09:46 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Jan 29 12:09:46 crc kubenswrapper[4660]: > Jan 29 12:09:51 crc kubenswrapper[4660]: I0129 12:09:51.200885 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h994k"] Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.015648 4660 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bchtm"] Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.016239 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bchtm" podUID="6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" containerName="registry-server" containerID="cri-o://9186310484b4ac28a6449720f0e34988c4346cded97ea6711c705d3223b5b2fb" gracePeriod=30 Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.025922 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rvv98"] Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.026187 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rvv98" podUID="58e4fad5-42bd-4652-9a66-d80ada00bb84" containerName="registry-server" containerID="cri-o://88fd49af8fac05bc9a1028b98e7537cf63eefc3c94fcbe1f93cc13ce55ad72b3" gracePeriod=30 Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.033975 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-49j6q"] Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.034264 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-49j6q" podUID="d70ebf77-b129-4e52-85f6-f969b97e855e" containerName="registry-server" containerID="cri-o://46e34cfab99d1e4cfb7a5c5dee600a15638ad53355d9904ae8e6695424a509d4" gracePeriod=30 Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.043336 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9x2dh"] Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.043619 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9x2dh" podUID="d4b35a13-7853-4d23-9cd4-015b2d10d25a" 
containerName="registry-server" containerID="cri-o://1a4c12a7c1653f3e40f6d5f341bf8a14c4be44f43d1b21597a978dc09bc0ac11" gracePeriod=30 Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.056294 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-grt4k"] Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.056489 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" podUID="2a6abc12-af78-4433-84ae-8421ade4d80c" containerName="marketplace-operator" containerID="cri-o://6b0f243b76e092081d7416edfb2c9b9504506471b90b05e816bff2a78061d812" gracePeriod=30 Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.070199 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwwx4"] Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.070532 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qwwx4" podUID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" containerName="registry-server" containerID="cri-o://6f20192daf96734714618a8926c62c21c0c891c33533c0dbe88f2790b0e38af8" gracePeriod=30 Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.080064 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x95ls"] Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.080317 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x95ls" podUID="2f46b165-b1bd-42ea-8704-adafba36b152" containerName="registry-server" containerID="cri-o://278d819aa7bfcd3285b3f5bead710b8d4086d2bdcab86741940a526d83d5972c" gracePeriod=30 Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.091916 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7lgwk"] Jan 29 12:09:52 crc 
kubenswrapper[4660]: I0129 12:09:52.092201 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7lgwk" podUID="42a55618-51a2-4df8-b139-ed326fd6371f" containerName="registry-server" containerID="cri-o://26709b15264a5cda86f43cbb1863cc4d4a6f957d3b120cc4bb383ac4d955cfcf" gracePeriod=30 Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.093186 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-shqms"] Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.094025 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-shqms" Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.118131 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nlxjk"] Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.118522 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nlxjk" podUID="dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" containerName="registry-server" containerID="cri-o://1373dbfcfacf7889cc2b92d948647b92b055055582d47a611beb5f3239c1ac73" gracePeriod=30 Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.121835 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-shqms"] Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.250203 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w62g9\" (UniqueName: \"kubernetes.io/projected/f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625-kube-api-access-w62g9\") pod \"marketplace-operator-79b997595-shqms\" (UID: \"f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625\") " pod="openshift-marketplace/marketplace-operator-79b997595-shqms" Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.250262 4660 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-shqms\" (UID: \"f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625\") " pod="openshift-marketplace/marketplace-operator-79b997595-shqms" Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.250427 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-shqms\" (UID: \"f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625\") " pod="openshift-marketplace/marketplace-operator-79b997595-shqms" Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.352598 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w62g9\" (UniqueName: \"kubernetes.io/projected/f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625-kube-api-access-w62g9\") pod \"marketplace-operator-79b997595-shqms\" (UID: \"f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625\") " pod="openshift-marketplace/marketplace-operator-79b997595-shqms" Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.352670 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-shqms\" (UID: \"f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625\") " pod="openshift-marketplace/marketplace-operator-79b997595-shqms" Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.354909 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-shqms\" (UID: \"f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625\") " pod="openshift-marketplace/marketplace-operator-79b997595-shqms" Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.355965 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-shqms\" (UID: \"f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625\") " pod="openshift-marketplace/marketplace-operator-79b997595-shqms" Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.360861 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-shqms\" (UID: \"f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625\") " pod="openshift-marketplace/marketplace-operator-79b997595-shqms" Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.369681 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w62g9\" (UniqueName: \"kubernetes.io/projected/f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625-kube-api-access-w62g9\") pod \"marketplace-operator-79b997595-shqms\" (UID: \"f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625\") " pod="openshift-marketplace/marketplace-operator-79b997595-shqms" Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.412088 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-shqms" Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.478174 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bchtm" Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.495124 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-49j6q" Jan 29 12:09:52 crc kubenswrapper[4660]: I0129 12:09:52.907898 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-shqms"] Jan 29 12:09:53 crc kubenswrapper[4660]: W0129 12:09:53.012515 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5d3a7e0_3f4d_4e66_ad34_4c7835e0b625.slice/crio-21b6461abffbdcc3ba011e199cc17d7724c40ac23b2298ea0d09fd50cd7f5266 WatchSource:0}: Error finding container 21b6461abffbdcc3ba011e199cc17d7724c40ac23b2298ea0d09fd50cd7f5266: Status 404 returned error can't find the container with id 21b6461abffbdcc3ba011e199cc17d7724c40ac23b2298ea0d09fd50cd7f5266 Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.014136 4660 generic.go:334] "Generic (PLEG): container finished" podID="6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" containerID="9186310484b4ac28a6449720f0e34988c4346cded97ea6711c705d3223b5b2fb" exitCode=0 Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.014188 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bchtm" event={"ID":"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c","Type":"ContainerDied","Data":"9186310484b4ac28a6449720f0e34988c4346cded97ea6711c705d3223b5b2fb"} Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.016232 4660 generic.go:334] "Generic (PLEG): container finished" podID="d70ebf77-b129-4e52-85f6-f969b97e855e" 
containerID="46e34cfab99d1e4cfb7a5c5dee600a15638ad53355d9904ae8e6695424a509d4" exitCode=0 Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.016278 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49j6q" event={"ID":"d70ebf77-b129-4e52-85f6-f969b97e855e","Type":"ContainerDied","Data":"46e34cfab99d1e4cfb7a5c5dee600a15638ad53355d9904ae8e6695424a509d4"} Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.019720 4660 generic.go:334] "Generic (PLEG): container finished" podID="2a6abc12-af78-4433-84ae-8421ade4d80c" containerID="6b0f243b76e092081d7416edfb2c9b9504506471b90b05e816bff2a78061d812" exitCode=0 Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.019768 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" event={"ID":"2a6abc12-af78-4433-84ae-8421ade4d80c","Type":"ContainerDied","Data":"6b0f243b76e092081d7416edfb2c9b9504506471b90b05e816bff2a78061d812"} Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.033849 4660 generic.go:334] "Generic (PLEG): container finished" podID="58e4fad5-42bd-4652-9a66-d80ada00bb84" containerID="88fd49af8fac05bc9a1028b98e7537cf63eefc3c94fcbe1f93cc13ce55ad72b3" exitCode=0 Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.033910 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvv98" event={"ID":"58e4fad5-42bd-4652-9a66-d80ada00bb84","Type":"ContainerDied","Data":"88fd49af8fac05bc9a1028b98e7537cf63eefc3c94fcbe1f93cc13ce55ad72b3"} Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.036545 4660 generic.go:334] "Generic (PLEG): container finished" podID="2f46b165-b1bd-42ea-8704-adafba36b152" containerID="278d819aa7bfcd3285b3f5bead710b8d4086d2bdcab86741940a526d83d5972c" exitCode=0 Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.036583 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-x95ls" event={"ID":"2f46b165-b1bd-42ea-8704-adafba36b152","Type":"ContainerDied","Data":"278d819aa7bfcd3285b3f5bead710b8d4086d2bdcab86741940a526d83d5972c"} Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.040206 4660 generic.go:334] "Generic (PLEG): container finished" podID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" containerID="6f20192daf96734714618a8926c62c21c0c891c33533c0dbe88f2790b0e38af8" exitCode=0 Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.040266 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwwx4" event={"ID":"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea","Type":"ContainerDied","Data":"6f20192daf96734714618a8926c62c21c0c891c33533c0dbe88f2790b0e38af8"} Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.045956 4660 generic.go:334] "Generic (PLEG): container finished" podID="42a55618-51a2-4df8-b139-ed326fd6371f" containerID="26709b15264a5cda86f43cbb1863cc4d4a6f957d3b120cc4bb383ac4d955cfcf" exitCode=0 Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.046027 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lgwk" event={"ID":"42a55618-51a2-4df8-b139-ed326fd6371f","Type":"ContainerDied","Data":"26709b15264a5cda86f43cbb1863cc4d4a6f957d3b120cc4bb383ac4d955cfcf"} Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.050523 4660 generic.go:334] "Generic (PLEG): container finished" podID="d4b35a13-7853-4d23-9cd4-015b2d10d25a" containerID="1a4c12a7c1653f3e40f6d5f341bf8a14c4be44f43d1b21597a978dc09bc0ac11" exitCode=0 Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.050558 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2dh" event={"ID":"d4b35a13-7853-4d23-9cd4-015b2d10d25a","Type":"ContainerDied","Data":"1a4c12a7c1653f3e40f6d5f341bf8a14c4be44f43d1b21597a978dc09bc0ac11"} Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.101830 4660 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.235396 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f976bb66f-thqg2"] Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.235586 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" podUID="04a42c98-e16e-4b8a-90a4-f17d06dcfbcd" containerName="controller-manager" containerID="cri-o://5a83a52421f2b7224defa928ca57a81ec43eb9d2a34a9291188dcbce0044b054" gracePeriod=30 Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.272393 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a6abc12-af78-4433-84ae-8421ade4d80c-marketplace-trusted-ca\") pod \"2a6abc12-af78-4433-84ae-8421ade4d80c\" (UID: \"2a6abc12-af78-4433-84ae-8421ade4d80c\") " Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.272473 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jgzj\" (UniqueName: \"kubernetes.io/projected/2a6abc12-af78-4433-84ae-8421ade4d80c-kube-api-access-6jgzj\") pod \"2a6abc12-af78-4433-84ae-8421ade4d80c\" (UID: \"2a6abc12-af78-4433-84ae-8421ade4d80c\") " Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.272492 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2a6abc12-af78-4433-84ae-8421ade4d80c-marketplace-operator-metrics\") pod \"2a6abc12-af78-4433-84ae-8421ade4d80c\" (UID: \"2a6abc12-af78-4433-84ae-8421ade4d80c\") " Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.274010 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2a6abc12-af78-4433-84ae-8421ade4d80c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "2a6abc12-af78-4433-84ae-8421ade4d80c" (UID: "2a6abc12-af78-4433-84ae-8421ade4d80c"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.284447 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6abc12-af78-4433-84ae-8421ade4d80c-kube-api-access-6jgzj" (OuterVolumeSpecName: "kube-api-access-6jgzj") pod "2a6abc12-af78-4433-84ae-8421ade4d80c" (UID: "2a6abc12-af78-4433-84ae-8421ade4d80c"). InnerVolumeSpecName "kube-api-access-6jgzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.286327 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a6abc12-af78-4433-84ae-8421ade4d80c-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "2a6abc12-af78-4433-84ae-8421ade4d80c" (UID: "2a6abc12-af78-4433-84ae-8421ade4d80c"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.345391 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c"] Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.345606 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" podUID="e45cc055-9cef-43ec-b66c-a9474f0f5b46" containerName="route-controller-manager" containerID="cri-o://422dcac1a5b70d75c9d633bd8da4352a9df2526e0eef98ff85b058f503dd7c90" gracePeriod=30 Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.373728 4660 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a6abc12-af78-4433-84ae-8421ade4d80c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.373762 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jgzj\" (UniqueName: \"kubernetes.io/projected/2a6abc12-af78-4433-84ae-8421ade4d80c-kube-api-access-6jgzj\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.373775 4660 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2a6abc12-af78-4433-84ae-8421ade4d80c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:53 crc kubenswrapper[4660]: E0129 12:09:53.393015 4660 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f20192daf96734714618a8926c62c21c0c891c33533c0dbe88f2790b0e38af8 is running failed: container process not found" containerID="6f20192daf96734714618a8926c62c21c0c891c33533c0dbe88f2790b0e38af8" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 12:09:53 crc 
kubenswrapper[4660]: E0129 12:09:53.393548 4660 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f20192daf96734714618a8926c62c21c0c891c33533c0dbe88f2790b0e38af8 is running failed: container process not found" containerID="6f20192daf96734714618a8926c62c21c0c891c33533c0dbe88f2790b0e38af8" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 12:09:53 crc kubenswrapper[4660]: E0129 12:09:53.394013 4660 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f20192daf96734714618a8926c62c21c0c891c33533c0dbe88f2790b0e38af8 is running failed: container process not found" containerID="6f20192daf96734714618a8926c62c21c0c891c33533c0dbe88f2790b0e38af8" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 12:09:53 crc kubenswrapper[4660]: E0129 12:09:53.394058 4660 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6f20192daf96734714618a8926c62c21c0c891c33533c0dbe88f2790b0e38af8 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-qwwx4" podUID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" containerName="registry-server" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.594785 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.680810 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4b4m\" (UniqueName: \"kubernetes.io/projected/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-kube-api-access-f4b4m\") pod \"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea\" (UID: \"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea\") " Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.680995 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-utilities\") pod \"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea\" (UID: \"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea\") " Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.681100 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-catalog-content\") pod \"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea\" (UID: \"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea\") " Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.687330 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-kube-api-access-f4b4m" (OuterVolumeSpecName: "kube-api-access-f4b4m") pod "e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" (UID: "e02cd887-c6fb-48e5-9c08-23bb0bffd1ea"). InnerVolumeSpecName "kube-api-access-f4b4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.687840 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-utilities" (OuterVolumeSpecName: "utilities") pod "e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" (UID: "e02cd887-c6fb-48e5-9c08-23bb0bffd1ea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.746984 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x95ls" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.782672 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4b4m\" (UniqueName: \"kubernetes.io/projected/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-kube-api-access-f4b4m\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.782724 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.799058 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" (UID: "e02cd887-c6fb-48e5-9c08-23bb0bffd1ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.832579 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-49j6q" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.883082 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bchtm" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.884105 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70ebf77-b129-4e52-85f6-f969b97e855e-utilities\") pod \"d70ebf77-b129-4e52-85f6-f969b97e855e\" (UID: \"d70ebf77-b129-4e52-85f6-f969b97e855e\") " Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.884198 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfml4\" (UniqueName: \"kubernetes.io/projected/d70ebf77-b129-4e52-85f6-f969b97e855e-kube-api-access-vfml4\") pod \"d70ebf77-b129-4e52-85f6-f969b97e855e\" (UID: \"d70ebf77-b129-4e52-85f6-f969b97e855e\") " Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.884248 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70ebf77-b129-4e52-85f6-f969b97e855e-catalog-content\") pod \"d70ebf77-b129-4e52-85f6-f969b97e855e\" (UID: \"d70ebf77-b129-4e52-85f6-f969b97e855e\") " Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.884478 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.894002 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d70ebf77-b129-4e52-85f6-f969b97e855e-kube-api-access-vfml4" (OuterVolumeSpecName: "kube-api-access-vfml4") pod "d70ebf77-b129-4e52-85f6-f969b97e855e" (UID: "d70ebf77-b129-4e52-85f6-f969b97e855e"). InnerVolumeSpecName "kube-api-access-vfml4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.894738 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.895408 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d70ebf77-b129-4e52-85f6-f969b97e855e-utilities" (OuterVolumeSpecName: "utilities") pod "d70ebf77-b129-4e52-85f6-f969b97e855e" (UID: "d70ebf77-b129-4e52-85f6-f969b97e855e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.988452 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.989073 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b35a13-7853-4d23-9cd4-015b2d10d25a-utilities\") pod \"d4b35a13-7853-4d23-9cd4-015b2d10d25a\" (UID: \"d4b35a13-7853-4d23-9cd4-015b2d10d25a\") " Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.989119 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-catalog-content\") pod \"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c\" (UID: \"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c\") " Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.989145 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw6tq\" (UniqueName: \"kubernetes.io/projected/d4b35a13-7853-4d23-9cd4-015b2d10d25a-kube-api-access-nw6tq\") pod \"d4b35a13-7853-4d23-9cd4-015b2d10d25a\" (UID: \"d4b35a13-7853-4d23-9cd4-015b2d10d25a\") " Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.989167 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d4b35a13-7853-4d23-9cd4-015b2d10d25a-catalog-content\") pod \"d4b35a13-7853-4d23-9cd4-015b2d10d25a\" (UID: \"d4b35a13-7853-4d23-9cd4-015b2d10d25a\") " Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.989214 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-utilities\") pod \"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c\" (UID: \"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c\") " Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.989248 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv294\" (UniqueName: \"kubernetes.io/projected/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-kube-api-access-sv294\") pod \"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c\" (UID: \"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c\") " Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.989484 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70ebf77-b129-4e52-85f6-f969b97e855e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.989503 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfml4\" (UniqueName: \"kubernetes.io/projected/d70ebf77-b129-4e52-85f6-f969b97e855e-kube-api-access-vfml4\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.994834 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b35a13-7853-4d23-9cd4-015b2d10d25a-kube-api-access-nw6tq" (OuterVolumeSpecName: "kube-api-access-nw6tq") pod "d4b35a13-7853-4d23-9cd4-015b2d10d25a" (UID: "d4b35a13-7853-4d23-9cd4-015b2d10d25a"). InnerVolumeSpecName "kube-api-access-nw6tq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.995099 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b35a13-7853-4d23-9cd4-015b2d10d25a-utilities" (OuterVolumeSpecName: "utilities") pod "d4b35a13-7853-4d23-9cd4-015b2d10d25a" (UID: "d4b35a13-7853-4d23-9cd4-015b2d10d25a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.996942 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-kube-api-access-sv294" (OuterVolumeSpecName: "kube-api-access-sv294") pod "6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" (UID: "6ff71e85-46e1-4ae6-a9bc-d721e1d5248c"). InnerVolumeSpecName "kube-api-access-sv294". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:53 crc kubenswrapper[4660]: I0129 12:09:53.997664 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-utilities" (OuterVolumeSpecName: "utilities") pod "6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" (UID: "6ff71e85-46e1-4ae6-a9bc-d721e1d5248c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.056359 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nlxjk" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.073381 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49j6q" event={"ID":"d70ebf77-b129-4e52-85f6-f969b97e855e","Type":"ContainerDied","Data":"1ab6cae4eb18ffa5466ddfb084ee959cce6c81f712b587690ebfafa4ddb67edd"} Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.073605 4660 scope.go:117] "RemoveContainer" containerID="46e34cfab99d1e4cfb7a5c5dee600a15638ad53355d9904ae8e6695424a509d4" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.073845 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-49j6q" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.082572 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvv98" event={"ID":"58e4fad5-42bd-4652-9a66-d80ada00bb84","Type":"ContainerDied","Data":"aaa6914d386046aa926b2d1b9eaba80696935f953094d820d83681558dc4e306"} Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.082684 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rvv98" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.086181 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-shqms" event={"ID":"f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625","Type":"ContainerStarted","Data":"a377f588b3f78ed7173fcf8cf162645869b4f520b85ccfdcdb9785889e8b2c3b"} Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.086221 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-shqms" event={"ID":"f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625","Type":"ContainerStarted","Data":"21b6461abffbdcc3ba011e199cc17d7724c40ac23b2298ea0d09fd50cd7f5266"} Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.087915 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-shqms" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.090748 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4fad5-42bd-4652-9a66-d80ada00bb84-utilities\") pod \"58e4fad5-42bd-4652-9a66-d80ada00bb84\" (UID: \"58e4fad5-42bd-4652-9a66-d80ada00bb84\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.090821 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4fad5-42bd-4652-9a66-d80ada00bb84-catalog-content\") pod \"58e4fad5-42bd-4652-9a66-d80ada00bb84\" (UID: \"58e4fad5-42bd-4652-9a66-d80ada00bb84\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.090854 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9n24\" (UniqueName: \"kubernetes.io/projected/58e4fad5-42bd-4652-9a66-d80ada00bb84-kube-api-access-t9n24\") pod \"58e4fad5-42bd-4652-9a66-d80ada00bb84\" (UID: 
\"58e4fad5-42bd-4652-9a66-d80ada00bb84\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.091114 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv294\" (UniqueName: \"kubernetes.io/projected/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-kube-api-access-sv294\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.091132 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b35a13-7853-4d23-9cd4-015b2d10d25a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.091144 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw6tq\" (UniqueName: \"kubernetes.io/projected/d4b35a13-7853-4d23-9cd4-015b2d10d25a-kube-api-access-nw6tq\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.091155 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.093880 4660 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-shqms container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" start-of-body= Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.093951 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-shqms" podUID="f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": dial tcp 10.217.0.64:8080: connect: connection refused" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.095459 4660 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/58e4fad5-42bd-4652-9a66-d80ada00bb84-utilities" (OuterVolumeSpecName: "utilities") pod "58e4fad5-42bd-4652-9a66-d80ada00bb84" (UID: "58e4fad5-42bd-4652-9a66-d80ada00bb84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.097000 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d70ebf77-b129-4e52-85f6-f969b97e855e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d70ebf77-b129-4e52-85f6-f969b97e855e" (UID: "d70ebf77-b129-4e52-85f6-f969b97e855e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.097454 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e4fad5-42bd-4652-9a66-d80ada00bb84-kube-api-access-t9n24" (OuterVolumeSpecName: "kube-api-access-t9n24") pod "58e4fad5-42bd-4652-9a66-d80ada00bb84" (UID: "58e4fad5-42bd-4652-9a66-d80ada00bb84"). InnerVolumeSpecName "kube-api-access-t9n24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.107265 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2dh" event={"ID":"d4b35a13-7853-4d23-9cd4-015b2d10d25a","Type":"ContainerDied","Data":"d97ec8765ff60273a93a3340222daf375388379eeb99d27ae89eb418adac4b51"} Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.107317 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9x2dh" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.116174 4660 scope.go:117] "RemoveContainer" containerID="351453542b518a53669db4fbd737dfa34afa77ef3fba903b1c0e999d250ffeaf" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.128557 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7lgwk" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.128840 4660 generic.go:334] "Generic (PLEG): container finished" podID="04a42c98-e16e-4b8a-90a4-f17d06dcfbcd" containerID="5a83a52421f2b7224defa928ca57a81ec43eb9d2a34a9291188dcbce0044b054" exitCode=0 Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.128892 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" event={"ID":"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd","Type":"ContainerDied","Data":"5a83a52421f2b7224defa928ca57a81ec43eb9d2a34a9291188dcbce0044b054"} Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.138848 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-shqms" podStartSLOduration=2.138832398 podStartE2EDuration="2.138832398s" podCreationTimestamp="2026-01-29 12:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:09:54.136504202 +0000 UTC m=+231.359446334" watchObservedRunningTime="2026-01-29 12:09:54.138832398 +0000 UTC m=+231.361774530" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.153020 4660 scope.go:117] "RemoveContainer" containerID="a8da9774864d0e9ea7a45b0298160b447d2f5f25437ced569edd919ede8e2a72" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.153357 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bchtm" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.153749 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bchtm" event={"ID":"6ff71e85-46e1-4ae6-a9bc-d721e1d5248c","Type":"ContainerDied","Data":"1afe7ff53a152e5aef206b0447f1a331c07609b802a7160d52cb252a018d2744"} Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.177729 4660 generic.go:334] "Generic (PLEG): container finished" podID="dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" containerID="1373dbfcfacf7889cc2b92d948647b92b055055582d47a611beb5f3239c1ac73" exitCode=0 Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.177870 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlxjk" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.178113 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxjk" event={"ID":"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae","Type":"ContainerDied","Data":"1373dbfcfacf7889cc2b92d948647b92b055055582d47a611beb5f3239c1ac73"} Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.178149 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxjk" event={"ID":"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae","Type":"ContainerDied","Data":"13a723ac3c0592037448ad19161c790af44f27cb30e5e472e1eb0ca1dae26db5"} Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.181474 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b35a13-7853-4d23-9cd4-015b2d10d25a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4b35a13-7853-4d23-9cd4-015b2d10d25a" (UID: "d4b35a13-7853-4d23-9cd4-015b2d10d25a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.181797 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" (UID: "6ff71e85-46e1-4ae6-a9bc-d721e1d5248c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.183539 4660 generic.go:334] "Generic (PLEG): container finished" podID="e45cc055-9cef-43ec-b66c-a9474f0f5b46" containerID="422dcac1a5b70d75c9d633bd8da4352a9df2526e0eef98ff85b058f503dd7c90" exitCode=0 Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.183647 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" event={"ID":"e45cc055-9cef-43ec-b66c-a9474f0f5b46","Type":"ContainerDied","Data":"422dcac1a5b70d75c9d633bd8da4352a9df2526e0eef98ff85b058f503dd7c90"} Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.185367 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.185368 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-grt4k" event={"ID":"2a6abc12-af78-4433-84ae-8421ade4d80c","Type":"ContainerDied","Data":"ac5cfe210741ee473e054f0449469ef0b4df5994070032478ec10765b56c9700"} Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.189777 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwwx4" event={"ID":"e02cd887-c6fb-48e5-9c08-23bb0bffd1ea","Type":"ContainerDied","Data":"99e23e23cb41491286b225adb5c1c1e5b1522499fb1ae58cf9e9f9dfff5866b2"} Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.189852 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwwx4" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.192117 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a55618-51a2-4df8-b139-ed326fd6371f-utilities\") pod \"42a55618-51a2-4df8-b139-ed326fd6371f\" (UID: \"42a55618-51a2-4df8-b139-ed326fd6371f\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.192167 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-catalog-content\") pod \"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae\" (UID: \"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.192212 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a55618-51a2-4df8-b139-ed326fd6371f-catalog-content\") pod \"42a55618-51a2-4df8-b139-ed326fd6371f\" (UID: \"42a55618-51a2-4df8-b139-ed326fd6371f\") " Jan 
29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.192296 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxpfm\" (UniqueName: \"kubernetes.io/projected/42a55618-51a2-4df8-b139-ed326fd6371f-kube-api-access-dxpfm\") pod \"42a55618-51a2-4df8-b139-ed326fd6371f\" (UID: \"42a55618-51a2-4df8-b139-ed326fd6371f\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.192332 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xznfq\" (UniqueName: \"kubernetes.io/projected/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-kube-api-access-xznfq\") pod \"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae\" (UID: \"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.192358 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-utilities\") pod \"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae\" (UID: \"dccda9d5-1d0b-4ba3-a3e4-07234d4596ae\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.192599 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b35a13-7853-4d23-9cd4-015b2d10d25a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.192616 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9n24\" (UniqueName: \"kubernetes.io/projected/58e4fad5-42bd-4652-9a66-d80ada00bb84-kube-api-access-t9n24\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.192626 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70ebf77-b129-4e52-85f6-f969b97e855e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.192636 4660 reconciler_common.go:293] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4fad5-42bd-4652-9a66-d80ada00bb84-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.192646 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.196364 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42a55618-51a2-4df8-b139-ed326fd6371f-utilities" (OuterVolumeSpecName: "utilities") pod "42a55618-51a2-4df8-b139-ed326fd6371f" (UID: "42a55618-51a2-4df8-b139-ed326fd6371f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.198716 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-utilities" (OuterVolumeSpecName: "utilities") pod "dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" (UID: "dccda9d5-1d0b-4ba3-a3e4-07234d4596ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.209273 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a55618-51a2-4df8-b139-ed326fd6371f-kube-api-access-dxpfm" (OuterVolumeSpecName: "kube-api-access-dxpfm") pod "42a55618-51a2-4df8-b139-ed326fd6371f" (UID: "42a55618-51a2-4df8-b139-ed326fd6371f"). InnerVolumeSpecName "kube-api-access-dxpfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.209366 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-kube-api-access-xznfq" (OuterVolumeSpecName: "kube-api-access-xznfq") pod "dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" (UID: "dccda9d5-1d0b-4ba3-a3e4-07234d4596ae"). InnerVolumeSpecName "kube-api-access-xznfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.224667 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e4fad5-42bd-4652-9a66-d80ada00bb84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58e4fad5-42bd-4652-9a66-d80ada00bb84" (UID: "58e4fad5-42bd-4652-9a66-d80ada00bb84"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.250585 4660 scope.go:117] "RemoveContainer" containerID="88fd49af8fac05bc9a1028b98e7537cf63eefc3c94fcbe1f93cc13ce55ad72b3" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.272876 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-grt4k"] Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.282977 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-grt4k"] Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.293328 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxpfm\" (UniqueName: \"kubernetes.io/projected/42a55618-51a2-4df8-b139-ed326fd6371f-kube-api-access-dxpfm\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.293346 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xznfq\" (UniqueName: 
\"kubernetes.io/projected/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-kube-api-access-xznfq\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.293356 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.293366 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4fad5-42bd-4652-9a66-d80ada00bb84-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.293377 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a55618-51a2-4df8-b139-ed326fd6371f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.296953 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwwx4"] Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.304034 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwwx4"] Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.314005 4660 scope.go:117] "RemoveContainer" containerID="920eaa381bf6d03ae4d025c8e8538d02bdda4b107d5cf2e9620e19f962e02a90" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.332867 4660 scope.go:117] "RemoveContainer" containerID="bda38277f6c53a0ee3348736b484b959407c3aefc108b07b416abe680ab1dfba" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.356834 4660 scope.go:117] "RemoveContainer" containerID="1a4c12a7c1653f3e40f6d5f341bf8a14c4be44f43d1b21597a978dc09bc0ac11" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.411775 4660 scope.go:117] "RemoveContainer" containerID="3e99974af0afc770aac40869bd81bd0d8cea24cb838f73c30f23a440a54c4276" Jan 29 12:09:54 crc 
kubenswrapper[4660]: I0129 12:09:54.425726 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-49j6q"] Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.438303 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-49j6q"] Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.443889 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9x2dh"] Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.447423 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9x2dh"] Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.456481 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42a55618-51a2-4df8-b139-ed326fd6371f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42a55618-51a2-4df8-b139-ed326fd6371f" (UID: "42a55618-51a2-4df8-b139-ed326fd6371f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.468226 4660 scope.go:117] "RemoveContainer" containerID="2fc059651739b5fe4bbc5a76110f43fbcdc8e4d41ed2acf149266abf4b1925e2" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.468402 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rvv98"] Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.469367 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rvv98"] Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.478053 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.482858 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x95ls" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.485151 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" (UID: "dccda9d5-1d0b-4ba3-a3e4-07234d4596ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.496309 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.496343 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a55618-51a2-4df8-b139-ed326fd6371f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.497241 4660 scope.go:117] "RemoveContainer" containerID="9186310484b4ac28a6449720f0e34988c4346cded97ea6711c705d3223b5b2fb" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.517121 4660 scope.go:117] "RemoveContainer" containerID="67a8f929923488a2c3f7513a24666f6570edd6d9806753b081c288bb1ec1de42" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.562938 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bchtm"] Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.562997 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bchtm"] Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.568595 4660 scope.go:117] "RemoveContainer" containerID="2c3f3baec840fcbbd39b9d6d7a5f3475854d9ea03653076dd249f6fb271c56ea" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 
12:09:54.580089 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.599983 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e45cc055-9cef-43ec-b66c-a9474f0f5b46-client-ca\") pod \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\" (UID: \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.600321 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45cc055-9cef-43ec-b66c-a9474f0f5b46-config\") pod \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\" (UID: \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.601260 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e45cc055-9cef-43ec-b66c-a9474f0f5b46-config" (OuterVolumeSpecName: "config") pod "e45cc055-9cef-43ec-b66c-a9474f0f5b46" (UID: "e45cc055-9cef-43ec-b66c-a9474f0f5b46"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.601673 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f46b165-b1bd-42ea-8704-adafba36b152-catalog-content\") pod \"2f46b165-b1bd-42ea-8704-adafba36b152\" (UID: \"2f46b165-b1bd-42ea-8704-adafba36b152\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.601728 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f46b165-b1bd-42ea-8704-adafba36b152-utilities\") pod \"2f46b165-b1bd-42ea-8704-adafba36b152\" (UID: \"2f46b165-b1bd-42ea-8704-adafba36b152\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.601766 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th8lm\" (UniqueName: \"kubernetes.io/projected/2f46b165-b1bd-42ea-8704-adafba36b152-kube-api-access-th8lm\") pod \"2f46b165-b1bd-42ea-8704-adafba36b152\" (UID: \"2f46b165-b1bd-42ea-8704-adafba36b152\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.601807 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74852\" (UniqueName: \"kubernetes.io/projected/e45cc055-9cef-43ec-b66c-a9474f0f5b46-kube-api-access-74852\") pod \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\" (UID: \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.601845 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e45cc055-9cef-43ec-b66c-a9474f0f5b46-serving-cert\") pod \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\" (UID: \"e45cc055-9cef-43ec-b66c-a9474f0f5b46\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.602280 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e45cc055-9cef-43ec-b66c-a9474f0f5b46-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.607957 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e45cc055-9cef-43ec-b66c-a9474f0f5b46-client-ca" (OuterVolumeSpecName: "client-ca") pod "e45cc055-9cef-43ec-b66c-a9474f0f5b46" (UID: "e45cc055-9cef-43ec-b66c-a9474f0f5b46"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.612705 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45cc055-9cef-43ec-b66c-a9474f0f5b46-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e45cc055-9cef-43ec-b66c-a9474f0f5b46" (UID: "e45cc055-9cef-43ec-b66c-a9474f0f5b46"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.613276 4660 scope.go:117] "RemoveContainer" containerID="1373dbfcfacf7889cc2b92d948647b92b055055582d47a611beb5f3239c1ac73" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.614508 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f46b165-b1bd-42ea-8704-adafba36b152-utilities" (OuterVolumeSpecName: "utilities") pod "2f46b165-b1bd-42ea-8704-adafba36b152" (UID: "2f46b165-b1bd-42ea-8704-adafba36b152"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.614561 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq"] Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.614845 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.614881 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.614889 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a55618-51a2-4df8-b139-ed326fd6371f" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.614895 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a55618-51a2-4df8-b139-ed326fd6371f" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.614903 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45cc055-9cef-43ec-b66c-a9474f0f5b46" containerName="route-controller-manager" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.614909 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45cc055-9cef-43ec-b66c-a9474f0f5b46" containerName="route-controller-manager" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.614916 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70ebf77-b129-4e52-85f6-f969b97e855e" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.614922 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70ebf77-b129-4e52-85f6-f969b97e855e" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.614961 4660 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d4b35a13-7853-4d23-9cd4-015b2d10d25a" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.614967 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b35a13-7853-4d23-9cd4-015b2d10d25a" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.614976 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.614981 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.614987 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f46b165-b1bd-42ea-8704-adafba36b152" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.614993 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f46b165-b1bd-42ea-8704-adafba36b152" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.615002 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.615008 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.615039 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a42c98-e16e-4b8a-90a4-f17d06dcfbcd" containerName="controller-manager" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.615046 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a42c98-e16e-4b8a-90a4-f17d06dcfbcd" containerName="controller-manager" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.615054 4660 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="58e4fad5-42bd-4652-9a66-d80ada00bb84" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.615060 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e4fad5-42bd-4652-9a66-d80ada00bb84" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.615067 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.615073 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.615080 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b35a13-7853-4d23-9cd4-015b2d10d25a" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.615085 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b35a13-7853-4d23-9cd4-015b2d10d25a" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.615233 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e45cc055-9cef-43ec-b66c-a9474f0f5b46-kube-api-access-74852" (OuterVolumeSpecName: "kube-api-access-74852") pod "e45cc055-9cef-43ec-b66c-a9474f0f5b46" (UID: "e45cc055-9cef-43ec-b66c-a9474f0f5b46"). InnerVolumeSpecName "kube-api-access-74852". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.615434 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f46b165-b1bd-42ea-8704-adafba36b152-kube-api-access-th8lm" (OuterVolumeSpecName: "kube-api-access-th8lm") pod "2f46b165-b1bd-42ea-8704-adafba36b152" (UID: "2f46b165-b1bd-42ea-8704-adafba36b152"). InnerVolumeSpecName "kube-api-access-th8lm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.616373 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6abc12-af78-4433-84ae-8421ade4d80c" containerName="marketplace-operator" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616386 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6abc12-af78-4433-84ae-8421ade4d80c" containerName="marketplace-operator" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.616395 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616403 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.616473 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70ebf77-b129-4e52-85f6-f969b97e855e" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616483 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70ebf77-b129-4e52-85f6-f969b97e855e" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.616494 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616502 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.616513 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616521 4660 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.616529 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616536 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.616546 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70ebf77-b129-4e52-85f6-f969b97e855e" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616552 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70ebf77-b129-4e52-85f6-f969b97e855e" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.616562 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a55618-51a2-4df8-b139-ed326fd6371f" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616568 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a55618-51a2-4df8-b139-ed326fd6371f" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.616596 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e4fad5-42bd-4652-9a66-d80ada00bb84" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616603 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e4fad5-42bd-4652-9a66-d80ada00bb84" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.616614 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616619 4660 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.616627 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f46b165-b1bd-42ea-8704-adafba36b152" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616633 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f46b165-b1bd-42ea-8704-adafba36b152" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.616641 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f46b165-b1bd-42ea-8704-adafba36b152" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616647 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f46b165-b1bd-42ea-8704-adafba36b152" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.616655 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e4fad5-42bd-4652-9a66-d80ada00bb84" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616661 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e4fad5-42bd-4652-9a66-d80ada00bb84" containerName="extract-utilities" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.616667 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a55618-51a2-4df8-b139-ed326fd6371f" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616673 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a55618-51a2-4df8-b139-ed326fd6371f" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.616679 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b35a13-7853-4d23-9cd4-015b2d10d25a" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616707 4660 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d4b35a13-7853-4d23-9cd4-015b2d10d25a" containerName="extract-content" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616812 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616824 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a42c98-e16e-4b8a-90a4-f17d06dcfbcd" containerName="controller-manager" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616833 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b35a13-7853-4d23-9cd4-015b2d10d25a" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616840 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="42a55618-51a2-4df8-b139-ed326fd6371f" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616867 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e4fad5-42bd-4652-9a66-d80ada00bb84" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616876 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616883 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a6abc12-af78-4433-84ae-8421ade4d80c" containerName="marketplace-operator" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616892 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="d70ebf77-b129-4e52-85f6-f969b97e855e" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616900 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f46b165-b1bd-42ea-8704-adafba36b152" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616906 4660 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" containerName="registry-server" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.616912 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="e45cc055-9cef-43ec-b66c-a9474f0f5b46" containerName="route-controller-manager" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.617306 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.645584 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq"] Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.654027 4660 scope.go:117] "RemoveContainer" containerID="0e9061321aa39ee511c3fb5c31f6a49e12edaca8da4083d147ac4c3a9be3803a" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.681858 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f46b165-b1bd-42ea-8704-adafba36b152-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f46b165-b1bd-42ea-8704-adafba36b152" (UID: "2f46b165-b1bd-42ea-8704-adafba36b152"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.694250 4660 scope.go:117] "RemoveContainer" containerID="6e652b94b7122fd58c52839d1ef881869e357215301718bafc99f919f422fe64" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.702751 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-client-ca\") pod \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.703085 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-proxy-ca-bundles\") pod \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.703195 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-serving-cert\") pod \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.703240 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96mm5\" (UniqueName: \"kubernetes.io/projected/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-kube-api-access-96mm5\") pod \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.703273 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-config\") pod \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\" (UID: \"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd\") " Jan 29 
12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.703445 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c8d8060-f4a5-4810-9d4c-df286281c993-config\") pod \"route-controller-manager-64ff76bc74-rdppq\" (UID: \"1c8d8060-f4a5-4810-9d4c-df286281c993\") " pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.703508 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c8d8060-f4a5-4810-9d4c-df286281c993-serving-cert\") pod \"route-controller-manager-64ff76bc74-rdppq\" (UID: \"1c8d8060-f4a5-4810-9d4c-df286281c993\") " pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.703542 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxsxq\" (UniqueName: \"kubernetes.io/projected/1c8d8060-f4a5-4810-9d4c-df286281c993-kube-api-access-zxsxq\") pod \"route-controller-manager-64ff76bc74-rdppq\" (UID: \"1c8d8060-f4a5-4810-9d4c-df286281c993\") " pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.703576 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c8d8060-f4a5-4810-9d4c-df286281c993-client-ca\") pod \"route-controller-manager-64ff76bc74-rdppq\" (UID: \"1c8d8060-f4a5-4810-9d4c-df286281c993\") " pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.703637 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2f46b165-b1bd-42ea-8704-adafba36b152-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.703653 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f46b165-b1bd-42ea-8704-adafba36b152-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.703662 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th8lm\" (UniqueName: \"kubernetes.io/projected/2f46b165-b1bd-42ea-8704-adafba36b152-kube-api-access-th8lm\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.703672 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74852\" (UniqueName: \"kubernetes.io/projected/e45cc055-9cef-43ec-b66c-a9474f0f5b46-kube-api-access-74852\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.703681 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e45cc055-9cef-43ec-b66c-a9474f0f5b46-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.703704 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e45cc055-9cef-43ec-b66c-a9474f0f5b46-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.703914 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-client-ca" (OuterVolumeSpecName: "client-ca") pod "04a42c98-e16e-4b8a-90a4-f17d06dcfbcd" (UID: "04a42c98-e16e-4b8a-90a4-f17d06dcfbcd"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.704134 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "04a42c98-e16e-4b8a-90a4-f17d06dcfbcd" (UID: "04a42c98-e16e-4b8a-90a4-f17d06dcfbcd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.704259 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-config" (OuterVolumeSpecName: "config") pod "04a42c98-e16e-4b8a-90a4-f17d06dcfbcd" (UID: "04a42c98-e16e-4b8a-90a4-f17d06dcfbcd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.706176 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "04a42c98-e16e-4b8a-90a4-f17d06dcfbcd" (UID: "04a42c98-e16e-4b8a-90a4-f17d06dcfbcd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.706565 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-kube-api-access-96mm5" (OuterVolumeSpecName: "kube-api-access-96mm5") pod "04a42c98-e16e-4b8a-90a4-f17d06dcfbcd" (UID: "04a42c98-e16e-4b8a-90a4-f17d06dcfbcd"). InnerVolumeSpecName "kube-api-access-96mm5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.714631 4660 scope.go:117] "RemoveContainer" containerID="1373dbfcfacf7889cc2b92d948647b92b055055582d47a611beb5f3239c1ac73" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.715165 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1373dbfcfacf7889cc2b92d948647b92b055055582d47a611beb5f3239c1ac73\": container with ID starting with 1373dbfcfacf7889cc2b92d948647b92b055055582d47a611beb5f3239c1ac73 not found: ID does not exist" containerID="1373dbfcfacf7889cc2b92d948647b92b055055582d47a611beb5f3239c1ac73" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.715206 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1373dbfcfacf7889cc2b92d948647b92b055055582d47a611beb5f3239c1ac73"} err="failed to get container status \"1373dbfcfacf7889cc2b92d948647b92b055055582d47a611beb5f3239c1ac73\": rpc error: code = NotFound desc = could not find container \"1373dbfcfacf7889cc2b92d948647b92b055055582d47a611beb5f3239c1ac73\": container with ID starting with 1373dbfcfacf7889cc2b92d948647b92b055055582d47a611beb5f3239c1ac73 not found: ID does not exist" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.715235 4660 scope.go:117] "RemoveContainer" containerID="0e9061321aa39ee511c3fb5c31f6a49e12edaca8da4083d147ac4c3a9be3803a" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.715645 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e9061321aa39ee511c3fb5c31f6a49e12edaca8da4083d147ac4c3a9be3803a\": container with ID starting with 0e9061321aa39ee511c3fb5c31f6a49e12edaca8da4083d147ac4c3a9be3803a not found: ID does not exist" containerID="0e9061321aa39ee511c3fb5c31f6a49e12edaca8da4083d147ac4c3a9be3803a" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.715673 
4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e9061321aa39ee511c3fb5c31f6a49e12edaca8da4083d147ac4c3a9be3803a"} err="failed to get container status \"0e9061321aa39ee511c3fb5c31f6a49e12edaca8da4083d147ac4c3a9be3803a\": rpc error: code = NotFound desc = could not find container \"0e9061321aa39ee511c3fb5c31f6a49e12edaca8da4083d147ac4c3a9be3803a\": container with ID starting with 0e9061321aa39ee511c3fb5c31f6a49e12edaca8da4083d147ac4c3a9be3803a not found: ID does not exist" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.715732 4660 scope.go:117] "RemoveContainer" containerID="6e652b94b7122fd58c52839d1ef881869e357215301718bafc99f919f422fe64" Jan 29 12:09:54 crc kubenswrapper[4660]: E0129 12:09:54.716118 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e652b94b7122fd58c52839d1ef881869e357215301718bafc99f919f422fe64\": container with ID starting with 6e652b94b7122fd58c52839d1ef881869e357215301718bafc99f919f422fe64 not found: ID does not exist" containerID="6e652b94b7122fd58c52839d1ef881869e357215301718bafc99f919f422fe64" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.716144 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e652b94b7122fd58c52839d1ef881869e357215301718bafc99f919f422fe64"} err="failed to get container status \"6e652b94b7122fd58c52839d1ef881869e357215301718bafc99f919f422fe64\": rpc error: code = NotFound desc = could not find container \"6e652b94b7122fd58c52839d1ef881869e357215301718bafc99f919f422fe64\": container with ID starting with 6e652b94b7122fd58c52839d1ef881869e357215301718bafc99f919f422fe64 not found: ID does not exist" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.716161 4660 scope.go:117] "RemoveContainer" containerID="6b0f243b76e092081d7416edfb2c9b9504506471b90b05e816bff2a78061d812" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 
12:09:54.730833 4660 scope.go:117] "RemoveContainer" containerID="6f20192daf96734714618a8926c62c21c0c891c33533c0dbe88f2790b0e38af8" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.744374 4660 scope.go:117] "RemoveContainer" containerID="1aa753813f78692639aa9b672d2a4425011c71a8bba2a365f7dcf2a330bee17e" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.759416 4660 scope.go:117] "RemoveContainer" containerID="13c8d1efa6d7efa2fd977593b5bfdfb4c020c11c461cc8cdb83baf2fe6b73c4e" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.805233 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c8d8060-f4a5-4810-9d4c-df286281c993-config\") pod \"route-controller-manager-64ff76bc74-rdppq\" (UID: \"1c8d8060-f4a5-4810-9d4c-df286281c993\") " pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.805439 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c8d8060-f4a5-4810-9d4c-df286281c993-serving-cert\") pod \"route-controller-manager-64ff76bc74-rdppq\" (UID: \"1c8d8060-f4a5-4810-9d4c-df286281c993\") " pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.805466 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxsxq\" (UniqueName: \"kubernetes.io/projected/1c8d8060-f4a5-4810-9d4c-df286281c993-kube-api-access-zxsxq\") pod \"route-controller-manager-64ff76bc74-rdppq\" (UID: \"1c8d8060-f4a5-4810-9d4c-df286281c993\") " pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.805517 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1c8d8060-f4a5-4810-9d4c-df286281c993-client-ca\") pod \"route-controller-manager-64ff76bc74-rdppq\" (UID: \"1c8d8060-f4a5-4810-9d4c-df286281c993\") " pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.805586 4660 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.805598 4660 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.805607 4660 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.805617 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96mm5\" (UniqueName: \"kubernetes.io/projected/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-kube-api-access-96mm5\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.805626 4660 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.806416 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nlxjk"] Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.806990 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c8d8060-f4a5-4810-9d4c-df286281c993-client-ca\") pod 
\"route-controller-manager-64ff76bc74-rdppq\" (UID: \"1c8d8060-f4a5-4810-9d4c-df286281c993\") " pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.809835 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nlxjk"] Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.813443 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c8d8060-f4a5-4810-9d4c-df286281c993-serving-cert\") pod \"route-controller-manager-64ff76bc74-rdppq\" (UID: \"1c8d8060-f4a5-4810-9d4c-df286281c993\") " pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.825148 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxsxq\" (UniqueName: \"kubernetes.io/projected/1c8d8060-f4a5-4810-9d4c-df286281c993-kube-api-access-zxsxq\") pod \"route-controller-manager-64ff76bc74-rdppq\" (UID: \"1c8d8060-f4a5-4810-9d4c-df286281c993\") " pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.854381 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c8d8060-f4a5-4810-9d4c-df286281c993-config\") pod \"route-controller-manager-64ff76bc74-rdppq\" (UID: \"1c8d8060-f4a5-4810-9d4c-df286281c993\") " pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:54 crc kubenswrapper[4660]: I0129 12:09:54.955290 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.205794 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lgwk" event={"ID":"42a55618-51a2-4df8-b139-ed326fd6371f","Type":"ContainerDied","Data":"2af1b7031b9ebdf1167d016d08ca41dd9bb2355e2017d1907b0030ea1dac3e53"} Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.205839 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7lgwk" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.205849 4660 scope.go:117] "RemoveContainer" containerID="26709b15264a5cda86f43cbb1863cc4d4a6f957d3b120cc4bb383ac4d955cfcf" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.208779 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" event={"ID":"04a42c98-e16e-4b8a-90a4-f17d06dcfbcd","Type":"ContainerDied","Data":"7992134e566792ef155946ec14b947cd242df817f5ec185fa5c2174c81c3229c"} Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.208819 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f976bb66f-thqg2" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.216532 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x95ls" event={"ID":"2f46b165-b1bd-42ea-8704-adafba36b152","Type":"ContainerDied","Data":"f59b5db63d3ad6fb6714bf9266c3b1752fecbf78430781e4a9e1946c65c47858"} Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.216573 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x95ls" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.219580 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" event={"ID":"e45cc055-9cef-43ec-b66c-a9474f0f5b46","Type":"ContainerDied","Data":"21a2aad043a16d9c625e95cbfa2dee75e2a26266b70e74016f9a2c6abfa21d6f"} Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.219644 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.226937 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-shqms" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.227202 4660 scope.go:117] "RemoveContainer" containerID="417c38450471ca91e91be6ca0fca649416566606ad81b8ad33f1be65c20e12b2" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.242888 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f976bb66f-thqg2"] Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.246768 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f976bb66f-thqg2"] Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.257984 4660 scope.go:117] "RemoveContainer" containerID="d788e0b9197f44bd930004b3d39a5c74c5c0205570a44226290177b98dc56638" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.283668 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7lgwk"] Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.303007 4660 scope.go:117] "RemoveContainer" containerID="5a83a52421f2b7224defa928ca57a81ec43eb9d2a34a9291188dcbce0044b054" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 
12:09:55.303220 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7lgwk"] Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.318048 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c"] Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.325051 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bdff7df7b-p4z7c"] Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.325130 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-trzg2"] Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.326718 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trzg2" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.332409 4660 scope.go:117] "RemoveContainer" containerID="278d819aa7bfcd3285b3f5bead710b8d4086d2bdcab86741940a526d83d5972c" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.333652 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.339140 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x95ls"] Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.342551 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x95ls"] Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.346738 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trzg2"] Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.360528 4660 scope.go:117] "RemoveContainer" containerID="2a34d2cdd8fa74712edb9ad7f825e0ac6185ed7e03cc80a5007fb126a19c93e3" Jan 29 12:09:55 crc kubenswrapper[4660]: 
I0129 12:09:55.378042 4660 scope.go:117] "RemoveContainer" containerID="aefcc489a109904c3d7b7ac7960be6d71605a55cf9c50acdd72d369be779d73d" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.392085 4660 scope.go:117] "RemoveContainer" containerID="422dcac1a5b70d75c9d633bd8da4352a9df2526e0eef98ff85b058f503dd7c90" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.403534 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq"] Jan 29 12:09:55 crc kubenswrapper[4660]: W0129 12:09:55.412462 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c8d8060_f4a5_4810_9d4c_df286281c993.slice/crio-d854a4d5ff063818e699160c595a766d059125af1a0fd3c12d7cfa78bdfbd5d5 WatchSource:0}: Error finding container d854a4d5ff063818e699160c595a766d059125af1a0fd3c12d7cfa78bdfbd5d5: Status 404 returned error can't find the container with id d854a4d5ff063818e699160c595a766d059125af1a0fd3c12d7cfa78bdfbd5d5 Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.415564 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f78af36d-0cfb-438d-9763-cff2b46f13f7-utilities\") pod \"certified-operators-trzg2\" (UID: \"f78af36d-0cfb-438d-9763-cff2b46f13f7\") " pod="openshift-marketplace/certified-operators-trzg2" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.415629 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk99m\" (UniqueName: \"kubernetes.io/projected/f78af36d-0cfb-438d-9763-cff2b46f13f7-kube-api-access-xk99m\") pod \"certified-operators-trzg2\" (UID: \"f78af36d-0cfb-438d-9763-cff2b46f13f7\") " pod="openshift-marketplace/certified-operators-trzg2" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.415650 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f78af36d-0cfb-438d-9763-cff2b46f13f7-catalog-content\") pod \"certified-operators-trzg2\" (UID: \"f78af36d-0cfb-438d-9763-cff2b46f13f7\") " pod="openshift-marketplace/certified-operators-trzg2" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.485184 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04a42c98-e16e-4b8a-90a4-f17d06dcfbcd" path="/var/lib/kubelet/pods/04a42c98-e16e-4b8a-90a4-f17d06dcfbcd/volumes" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.485808 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a6abc12-af78-4433-84ae-8421ade4d80c" path="/var/lib/kubelet/pods/2a6abc12-af78-4433-84ae-8421ade4d80c/volumes" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.486390 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f46b165-b1bd-42ea-8704-adafba36b152" path="/var/lib/kubelet/pods/2f46b165-b1bd-42ea-8704-adafba36b152/volumes" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.490623 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a55618-51a2-4df8-b139-ed326fd6371f" path="/var/lib/kubelet/pods/42a55618-51a2-4df8-b139-ed326fd6371f/volumes" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.491289 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e4fad5-42bd-4652-9a66-d80ada00bb84" path="/var/lib/kubelet/pods/58e4fad5-42bd-4652-9a66-d80ada00bb84/volumes" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.492420 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ff71e85-46e1-4ae6-a9bc-d721e1d5248c" path="/var/lib/kubelet/pods/6ff71e85-46e1-4ae6-a9bc-d721e1d5248c/volumes" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.493198 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b35a13-7853-4d23-9cd4-015b2d10d25a" 
path="/var/lib/kubelet/pods/d4b35a13-7853-4d23-9cd4-015b2d10d25a/volumes" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.494908 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d70ebf77-b129-4e52-85f6-f969b97e855e" path="/var/lib/kubelet/pods/d70ebf77-b129-4e52-85f6-f969b97e855e/volumes" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.496019 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dccda9d5-1d0b-4ba3-a3e4-07234d4596ae" path="/var/lib/kubelet/pods/dccda9d5-1d0b-4ba3-a3e4-07234d4596ae/volumes" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.496966 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e02cd887-c6fb-48e5-9c08-23bb0bffd1ea" path="/var/lib/kubelet/pods/e02cd887-c6fb-48e5-9c08-23bb0bffd1ea/volumes" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.498503 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e45cc055-9cef-43ec-b66c-a9474f0f5b46" path="/var/lib/kubelet/pods/e45cc055-9cef-43ec-b66c-a9474f0f5b46/volumes" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.516424 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk99m\" (UniqueName: \"kubernetes.io/projected/f78af36d-0cfb-438d-9763-cff2b46f13f7-kube-api-access-xk99m\") pod \"certified-operators-trzg2\" (UID: \"f78af36d-0cfb-438d-9763-cff2b46f13f7\") " pod="openshift-marketplace/certified-operators-trzg2" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.516615 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f78af36d-0cfb-438d-9763-cff2b46f13f7-catalog-content\") pod \"certified-operators-trzg2\" (UID: \"f78af36d-0cfb-438d-9763-cff2b46f13f7\") " pod="openshift-marketplace/certified-operators-trzg2" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.516754 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f78af36d-0cfb-438d-9763-cff2b46f13f7-utilities\") pod \"certified-operators-trzg2\" (UID: \"f78af36d-0cfb-438d-9763-cff2b46f13f7\") " pod="openshift-marketplace/certified-operators-trzg2" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.519318 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f78af36d-0cfb-438d-9763-cff2b46f13f7-catalog-content\") pod \"certified-operators-trzg2\" (UID: \"f78af36d-0cfb-438d-9763-cff2b46f13f7\") " pod="openshift-marketplace/certified-operators-trzg2" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.519796 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f78af36d-0cfb-438d-9763-cff2b46f13f7-utilities\") pod \"certified-operators-trzg2\" (UID: \"f78af36d-0cfb-438d-9763-cff2b46f13f7\") " pod="openshift-marketplace/certified-operators-trzg2" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.538851 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk99m\" (UniqueName: \"kubernetes.io/projected/f78af36d-0cfb-438d-9763-cff2b46f13f7-kube-api-access-xk99m\") pod \"certified-operators-trzg2\" (UID: \"f78af36d-0cfb-438d-9763-cff2b46f13f7\") " pod="openshift-marketplace/certified-operators-trzg2" Jan 29 12:09:55 crc kubenswrapper[4660]: I0129 12:09:55.656061 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-trzg2" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.087195 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trzg2"] Jan 29 12:09:56 crc kubenswrapper[4660]: W0129 12:09:56.090626 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf78af36d_0cfb_438d_9763_cff2b46f13f7.slice/crio-2a351bbb0b4565aa5b0581b9b8f618fa8824a981a7a7d15c1b9726feddb35f22 WatchSource:0}: Error finding container 2a351bbb0b4565aa5b0581b9b8f618fa8824a981a7a7d15c1b9726feddb35f22: Status 404 returned error can't find the container with id 2a351bbb0b4565aa5b0581b9b8f618fa8824a981a7a7d15c1b9726feddb35f22 Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.231293 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" event={"ID":"1c8d8060-f4a5-4810-9d4c-df286281c993","Type":"ContainerStarted","Data":"7b424647719358be44061cf2f7a20f641e27adc142039244a1e2c156695c974b"} Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.231622 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" event={"ID":"1c8d8060-f4a5-4810-9d4c-df286281c993","Type":"ContainerStarted","Data":"d854a4d5ff063818e699160c595a766d059125af1a0fd3c12d7cfa78bdfbd5d5"} Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.232180 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.237708 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.238341 4660 
generic.go:334] "Generic (PLEG): container finished" podID="f78af36d-0cfb-438d-9763-cff2b46f13f7" containerID="d2e6085a43551b024765cb96ebaf83620803d2a0fb5a95059604bea43a2cb9d2" exitCode=0 Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.238422 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trzg2" event={"ID":"f78af36d-0cfb-438d-9763-cff2b46f13f7","Type":"ContainerDied","Data":"d2e6085a43551b024765cb96ebaf83620803d2a0fb5a95059604bea43a2cb9d2"} Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.238486 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trzg2" event={"ID":"f78af36d-0cfb-438d-9763-cff2b46f13f7","Type":"ContainerStarted","Data":"2a351bbb0b4565aa5b0581b9b8f618fa8824a981a7a7d15c1b9726feddb35f22"} Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.251209 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64ff76bc74-rdppq" podStartSLOduration=3.251188029 podStartE2EDuration="3.251188029s" podCreationTimestamp="2026-01-29 12:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:09:56.246895937 +0000 UTC m=+233.469838079" watchObservedRunningTime="2026-01-29 12:09:56.251188029 +0000 UTC m=+233.474130161" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.318214 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kxjhk"] Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.319367 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kxjhk" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.323221 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.327364 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kxjhk"] Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.432708 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f109fce-f7d8-4f49-970d-14950db78713-utilities\") pod \"community-operators-kxjhk\" (UID: \"4f109fce-f7d8-4f49-970d-14950db78713\") " pod="openshift-marketplace/community-operators-kxjhk" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.432766 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f109fce-f7d8-4f49-970d-14950db78713-catalog-content\") pod \"community-operators-kxjhk\" (UID: \"4f109fce-f7d8-4f49-970d-14950db78713\") " pod="openshift-marketplace/community-operators-kxjhk" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.432792 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsrs4\" (UniqueName: \"kubernetes.io/projected/4f109fce-f7d8-4f49-970d-14950db78713-kube-api-access-dsrs4\") pod \"community-operators-kxjhk\" (UID: \"4f109fce-f7d8-4f49-970d-14950db78713\") " pod="openshift-marketplace/community-operators-kxjhk" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.533900 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f109fce-f7d8-4f49-970d-14950db78713-utilities\") pod \"community-operators-kxjhk\" (UID: 
\"4f109fce-f7d8-4f49-970d-14950db78713\") " pod="openshift-marketplace/community-operators-kxjhk" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.534397 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f109fce-f7d8-4f49-970d-14950db78713-utilities\") pod \"community-operators-kxjhk\" (UID: \"4f109fce-f7d8-4f49-970d-14950db78713\") " pod="openshift-marketplace/community-operators-kxjhk" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.534003 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f109fce-f7d8-4f49-970d-14950db78713-catalog-content\") pod \"community-operators-kxjhk\" (UID: \"4f109fce-f7d8-4f49-970d-14950db78713\") " pod="openshift-marketplace/community-operators-kxjhk" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.534780 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsrs4\" (UniqueName: \"kubernetes.io/projected/4f109fce-f7d8-4f49-970d-14950db78713-kube-api-access-dsrs4\") pod \"community-operators-kxjhk\" (UID: \"4f109fce-f7d8-4f49-970d-14950db78713\") " pod="openshift-marketplace/community-operators-kxjhk" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.535115 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f109fce-f7d8-4f49-970d-14950db78713-catalog-content\") pod \"community-operators-kxjhk\" (UID: \"4f109fce-f7d8-4f49-970d-14950db78713\") " pod="openshift-marketplace/community-operators-kxjhk" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.557553 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsrs4\" (UniqueName: \"kubernetes.io/projected/4f109fce-f7d8-4f49-970d-14950db78713-kube-api-access-dsrs4\") pod \"community-operators-kxjhk\" (UID: 
\"4f109fce-f7d8-4f49-970d-14950db78713\") " pod="openshift-marketplace/community-operators-kxjhk" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.597032 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9d77d69bf-m2h5g"] Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.597647 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.600083 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.600391 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.600492 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.600600 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.600933 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.601080 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.608669 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.615944 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9d77d69bf-m2h5g"] Jan 29 12:09:56 crc 
kubenswrapper[4660]: I0129 12:09:56.649551 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kxjhk" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.736908 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91-proxy-ca-bundles\") pod \"controller-manager-9d77d69bf-m2h5g\" (UID: \"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91\") " pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.737229 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91-serving-cert\") pod \"controller-manager-9d77d69bf-m2h5g\" (UID: \"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91\") " pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.737270 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91-config\") pod \"controller-manager-9d77d69bf-m2h5g\" (UID: \"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91\") " pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.737301 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbqww\" (UniqueName: \"kubernetes.io/projected/c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91-kube-api-access-bbqww\") pod \"controller-manager-9d77d69bf-m2h5g\" (UID: \"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91\") " pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.737351 4660 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91-client-ca\") pod \"controller-manager-9d77d69bf-m2h5g\" (UID: \"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91\") " pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.838475 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91-proxy-ca-bundles\") pod \"controller-manager-9d77d69bf-m2h5g\" (UID: \"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91\") " pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.838547 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91-serving-cert\") pod \"controller-manager-9d77d69bf-m2h5g\" (UID: \"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91\") " pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.838594 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91-config\") pod \"controller-manager-9d77d69bf-m2h5g\" (UID: \"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91\") " pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.838613 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbqww\" (UniqueName: \"kubernetes.io/projected/c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91-kube-api-access-bbqww\") pod \"controller-manager-9d77d69bf-m2h5g\" (UID: \"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91\") " 
pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.838659 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91-client-ca\") pod \"controller-manager-9d77d69bf-m2h5g\" (UID: \"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91\") " pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.839402 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91-proxy-ca-bundles\") pod \"controller-manager-9d77d69bf-m2h5g\" (UID: \"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91\") " pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.839449 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91-client-ca\") pod \"controller-manager-9d77d69bf-m2h5g\" (UID: \"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91\") " pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.839897 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91-config\") pod \"controller-manager-9d77d69bf-m2h5g\" (UID: \"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91\") " pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.853613 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91-serving-cert\") pod \"controller-manager-9d77d69bf-m2h5g\" (UID: 
\"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91\") " pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.865385 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbqww\" (UniqueName: \"kubernetes.io/projected/c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91-kube-api-access-bbqww\") pod \"controller-manager-9d77d69bf-m2h5g\" (UID: \"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91\") " pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:56 crc kubenswrapper[4660]: I0129 12:09:56.915811 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.043163 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kxjhk"] Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.248076 4660 generic.go:334] "Generic (PLEG): container finished" podID="4f109fce-f7d8-4f49-970d-14950db78713" containerID="c3534b82b93ab741fa2adbd3a024a1c64daaac55ddf14f343db3db3464b37ca4" exitCode=0 Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.248153 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxjhk" event={"ID":"4f109fce-f7d8-4f49-970d-14950db78713","Type":"ContainerDied","Data":"c3534b82b93ab741fa2adbd3a024a1c64daaac55ddf14f343db3db3464b37ca4"} Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.248221 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxjhk" event={"ID":"4f109fce-f7d8-4f49-970d-14950db78713","Type":"ContainerStarted","Data":"c8a1a32f17bd71bbd44b2237885c0e76fc7dbb96bffbebd75e2d5ac28431c8ec"} Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.251663 4660 generic.go:334] "Generic (PLEG): container finished" podID="f78af36d-0cfb-438d-9763-cff2b46f13f7" 
containerID="973df592da8e5cf577ca71ad1fd1cea5382ad1015dd2b6359ffd49c47992f9ca" exitCode=0 Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.252295 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trzg2" event={"ID":"f78af36d-0cfb-438d-9763-cff2b46f13f7","Type":"ContainerDied","Data":"973df592da8e5cf577ca71ad1fd1cea5382ad1015dd2b6359ffd49c47992f9ca"} Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.363428 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9d77d69bf-m2h5g"] Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.733798 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7lkhw"] Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.736076 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7lkhw" Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.745314 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.777487 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7lkhw"] Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.856072 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b24d899-bb6c-465e-8e0f-594f8581b035-catalog-content\") pod \"redhat-marketplace-7lkhw\" (UID: \"1b24d899-bb6c-465e-8e0f-594f8581b035\") " pod="openshift-marketplace/redhat-marketplace-7lkhw" Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.856113 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dg64\" (UniqueName: 
\"kubernetes.io/projected/1b24d899-bb6c-465e-8e0f-594f8581b035-kube-api-access-5dg64\") pod \"redhat-marketplace-7lkhw\" (UID: \"1b24d899-bb6c-465e-8e0f-594f8581b035\") " pod="openshift-marketplace/redhat-marketplace-7lkhw" Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.856154 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b24d899-bb6c-465e-8e0f-594f8581b035-utilities\") pod \"redhat-marketplace-7lkhw\" (UID: \"1b24d899-bb6c-465e-8e0f-594f8581b035\") " pod="openshift-marketplace/redhat-marketplace-7lkhw" Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.957287 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b24d899-bb6c-465e-8e0f-594f8581b035-utilities\") pod \"redhat-marketplace-7lkhw\" (UID: \"1b24d899-bb6c-465e-8e0f-594f8581b035\") " pod="openshift-marketplace/redhat-marketplace-7lkhw" Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.957372 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b24d899-bb6c-465e-8e0f-594f8581b035-catalog-content\") pod \"redhat-marketplace-7lkhw\" (UID: \"1b24d899-bb6c-465e-8e0f-594f8581b035\") " pod="openshift-marketplace/redhat-marketplace-7lkhw" Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.957404 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dg64\" (UniqueName: \"kubernetes.io/projected/1b24d899-bb6c-465e-8e0f-594f8581b035-kube-api-access-5dg64\") pod \"redhat-marketplace-7lkhw\" (UID: \"1b24d899-bb6c-465e-8e0f-594f8581b035\") " pod="openshift-marketplace/redhat-marketplace-7lkhw" Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.957744 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1b24d899-bb6c-465e-8e0f-594f8581b035-utilities\") pod \"redhat-marketplace-7lkhw\" (UID: \"1b24d899-bb6c-465e-8e0f-594f8581b035\") " pod="openshift-marketplace/redhat-marketplace-7lkhw" Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.957808 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b24d899-bb6c-465e-8e0f-594f8581b035-catalog-content\") pod \"redhat-marketplace-7lkhw\" (UID: \"1b24d899-bb6c-465e-8e0f-594f8581b035\") " pod="openshift-marketplace/redhat-marketplace-7lkhw" Jan 29 12:09:57 crc kubenswrapper[4660]: I0129 12:09:57.979563 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dg64\" (UniqueName: \"kubernetes.io/projected/1b24d899-bb6c-465e-8e0f-594f8581b035-kube-api-access-5dg64\") pod \"redhat-marketplace-7lkhw\" (UID: \"1b24d899-bb6c-465e-8e0f-594f8581b035\") " pod="openshift-marketplace/redhat-marketplace-7lkhw" Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.081140 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7lkhw" Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.268848 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trzg2" event={"ID":"f78af36d-0cfb-438d-9763-cff2b46f13f7","Type":"ContainerStarted","Data":"c9f23611bee01f6f8d8af4a3ca4e508fc8780e8857a3e014e4a18c580f6e6e6d"} Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.274469 4660 generic.go:334] "Generic (PLEG): container finished" podID="4f109fce-f7d8-4f49-970d-14950db78713" containerID="e906fc36ba63a72ac35e35be26e19e4bdb7e7f9235932d9896f3ab64b8f1847f" exitCode=0 Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.274584 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxjhk" event={"ID":"4f109fce-f7d8-4f49-970d-14950db78713","Type":"ContainerDied","Data":"e906fc36ba63a72ac35e35be26e19e4bdb7e7f9235932d9896f3ab64b8f1847f"} Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.287596 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" event={"ID":"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91","Type":"ContainerStarted","Data":"befb807449588a14e461476552d74fee3d1ea11aa2dee3ec7f78271f3b3e6aa3"} Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.287629 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" event={"ID":"c4c607b6-24c9-4d1b-b7c6-42d2ff32cd91","Type":"ContainerStarted","Data":"3c617078e6349563b990e8f8edf728c8e51bdf31a29c662631b10a91c3709458"} Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.287646 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.290435 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-trzg2" podStartSLOduration=1.796747231 podStartE2EDuration="3.290416212s" podCreationTimestamp="2026-01-29 12:09:55 +0000 UTC" firstStartedPulling="2026-01-29 12:09:56.239978562 +0000 UTC m=+233.462920694" lastFinishedPulling="2026-01-29 12:09:57.733647543 +0000 UTC m=+234.956589675" observedRunningTime="2026-01-29 12:09:58.286682356 +0000 UTC m=+235.509624508" watchObservedRunningTime="2026-01-29 12:09:58.290416212 +0000 UTC m=+235.513358344" Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.293829 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.548990 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9d77d69bf-m2h5g" podStartSLOduration=5.548972056 podStartE2EDuration="5.548972056s" podCreationTimestamp="2026-01-29 12:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:09:58.324034383 +0000 UTC m=+235.546976515" watchObservedRunningTime="2026-01-29 12:09:58.548972056 +0000 UTC m=+235.771914188" Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.549722 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7lkhw"] Jan 29 12:09:58 crc kubenswrapper[4660]: W0129 12:09:58.556557 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b24d899_bb6c_465e_8e0f_594f8581b035.slice/crio-33e269380036aaedceab28a7287beb843eb44ac5985029f4249712bf46f7d3f8 WatchSource:0}: Error finding container 33e269380036aaedceab28a7287beb843eb44ac5985029f4249712bf46f7d3f8: Status 404 returned error can't find the container with id 
33e269380036aaedceab28a7287beb843eb44ac5985029f4249712bf46f7d3f8 Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.716309 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gfh45"] Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.717971 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gfh45" Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.722599 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.729982 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gfh45"] Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.767114 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6pqp\" (UniqueName: \"kubernetes.io/projected/c5f2d9bf-38d4-4484-a429-37373a55db37-kube-api-access-w6pqp\") pod \"redhat-operators-gfh45\" (UID: \"c5f2d9bf-38d4-4484-a429-37373a55db37\") " pod="openshift-marketplace/redhat-operators-gfh45" Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.767191 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f2d9bf-38d4-4484-a429-37373a55db37-utilities\") pod \"redhat-operators-gfh45\" (UID: \"c5f2d9bf-38d4-4484-a429-37373a55db37\") " pod="openshift-marketplace/redhat-operators-gfh45" Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.767245 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f2d9bf-38d4-4484-a429-37373a55db37-catalog-content\") pod \"redhat-operators-gfh45\" (UID: \"c5f2d9bf-38d4-4484-a429-37373a55db37\") " 
pod="openshift-marketplace/redhat-operators-gfh45" Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.867909 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6pqp\" (UniqueName: \"kubernetes.io/projected/c5f2d9bf-38d4-4484-a429-37373a55db37-kube-api-access-w6pqp\") pod \"redhat-operators-gfh45\" (UID: \"c5f2d9bf-38d4-4484-a429-37373a55db37\") " pod="openshift-marketplace/redhat-operators-gfh45" Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.868210 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f2d9bf-38d4-4484-a429-37373a55db37-utilities\") pod \"redhat-operators-gfh45\" (UID: \"c5f2d9bf-38d4-4484-a429-37373a55db37\") " pod="openshift-marketplace/redhat-operators-gfh45" Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.868344 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f2d9bf-38d4-4484-a429-37373a55db37-catalog-content\") pod \"redhat-operators-gfh45\" (UID: \"c5f2d9bf-38d4-4484-a429-37373a55db37\") " pod="openshift-marketplace/redhat-operators-gfh45" Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.868659 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f2d9bf-38d4-4484-a429-37373a55db37-utilities\") pod \"redhat-operators-gfh45\" (UID: \"c5f2d9bf-38d4-4484-a429-37373a55db37\") " pod="openshift-marketplace/redhat-operators-gfh45" Jan 29 12:09:58 crc kubenswrapper[4660]: I0129 12:09:58.868757 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f2d9bf-38d4-4484-a429-37373a55db37-catalog-content\") pod \"redhat-operators-gfh45\" (UID: \"c5f2d9bf-38d4-4484-a429-37373a55db37\") " pod="openshift-marketplace/redhat-operators-gfh45" Jan 29 12:09:58 crc 
kubenswrapper[4660]: I0129 12:09:58.887230 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6pqp\" (UniqueName: \"kubernetes.io/projected/c5f2d9bf-38d4-4484-a429-37373a55db37-kube-api-access-w6pqp\") pod \"redhat-operators-gfh45\" (UID: \"c5f2d9bf-38d4-4484-a429-37373a55db37\") " pod="openshift-marketplace/redhat-operators-gfh45" Jan 29 12:09:59 crc kubenswrapper[4660]: I0129 12:09:59.057587 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gfh45" Jan 29 12:09:59 crc kubenswrapper[4660]: I0129 12:09:59.297842 4660 generic.go:334] "Generic (PLEG): container finished" podID="1b24d899-bb6c-465e-8e0f-594f8581b035" containerID="0dca58d49f513404681bc3bdaeaba957065097c1475c15063375e6df9225dc8b" exitCode=0 Jan 29 12:09:59 crc kubenswrapper[4660]: I0129 12:09:59.297960 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lkhw" event={"ID":"1b24d899-bb6c-465e-8e0f-594f8581b035","Type":"ContainerDied","Data":"0dca58d49f513404681bc3bdaeaba957065097c1475c15063375e6df9225dc8b"} Jan 29 12:09:59 crc kubenswrapper[4660]: I0129 12:09:59.298777 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lkhw" event={"ID":"1b24d899-bb6c-465e-8e0f-594f8581b035","Type":"ContainerStarted","Data":"33e269380036aaedceab28a7287beb843eb44ac5985029f4249712bf46f7d3f8"} Jan 29 12:09:59 crc kubenswrapper[4660]: I0129 12:09:59.301525 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxjhk" event={"ID":"4f109fce-f7d8-4f49-970d-14950db78713","Type":"ContainerStarted","Data":"7fd196cb74752c11dfa9f23698faff35a02e15320ebd1a9b2926b45e2a7b2ebf"} Jan 29 12:09:59 crc kubenswrapper[4660]: I0129 12:09:59.335779 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kxjhk" 
podStartSLOduration=1.83048468 podStartE2EDuration="3.33575841s" podCreationTimestamp="2026-01-29 12:09:56 +0000 UTC" firstStartedPulling="2026-01-29 12:09:57.249093706 +0000 UTC m=+234.472035828" lastFinishedPulling="2026-01-29 12:09:58.754367426 +0000 UTC m=+235.977309558" observedRunningTime="2026-01-29 12:09:59.334402382 +0000 UTC m=+236.557344514" watchObservedRunningTime="2026-01-29 12:09:59.33575841 +0000 UTC m=+236.558700542" Jan 29 12:09:59 crc kubenswrapper[4660]: I0129 12:09:59.554659 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gfh45"] Jan 29 12:10:00 crc kubenswrapper[4660]: I0129 12:10:00.308509 4660 generic.go:334] "Generic (PLEG): container finished" podID="c5f2d9bf-38d4-4484-a429-37373a55db37" containerID="c49b550466a7b9ea09c619fbb320de3ee9945cad5aaa699b74e7daee07593eb9" exitCode=0 Jan 29 12:10:00 crc kubenswrapper[4660]: I0129 12:10:00.309147 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gfh45" event={"ID":"c5f2d9bf-38d4-4484-a429-37373a55db37","Type":"ContainerDied","Data":"c49b550466a7b9ea09c619fbb320de3ee9945cad5aaa699b74e7daee07593eb9"} Jan 29 12:10:00 crc kubenswrapper[4660]: I0129 12:10:00.309250 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gfh45" event={"ID":"c5f2d9bf-38d4-4484-a429-37373a55db37","Type":"ContainerStarted","Data":"37da66e022de9dabe019499ce6f2427cc12a7fefa3e457ae8e3030de27580d34"} Jan 29 12:10:00 crc kubenswrapper[4660]: I0129 12:10:00.315357 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lkhw" event={"ID":"1b24d899-bb6c-465e-8e0f-594f8581b035","Type":"ContainerStarted","Data":"d295e06c81908d9ae9e0d9ccb1b22da1e5dc5c79378b1953fa5c8f802f745c7d"} Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.322657 4660 generic.go:334] "Generic (PLEG): container finished" podID="1b24d899-bb6c-465e-8e0f-594f8581b035" 
containerID="d295e06c81908d9ae9e0d9ccb1b22da1e5dc5c79378b1953fa5c8f802f745c7d" exitCode=0 Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.322730 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lkhw" event={"ID":"1b24d899-bb6c-465e-8e0f-594f8581b035","Type":"ContainerDied","Data":"d295e06c81908d9ae9e0d9ccb1b22da1e5dc5c79378b1953fa5c8f802f745c7d"} Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.326283 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gfh45" event={"ID":"c5f2d9bf-38d4-4484-a429-37373a55db37","Type":"ContainerStarted","Data":"7eaa500c86515859487699552be3289a8050c3cb4d18758bd4c48075c01975cc"} Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.573805 4660 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.574361 4660 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.574586 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7" gracePeriod=15 Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.574652 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7" gracePeriod=15 Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.574684 4660 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3" gracePeriod=15 Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.574683 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.574701 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100" gracePeriod=15 Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.574684 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf" gracePeriod=15 Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.575796 4660 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 12:10:01 crc kubenswrapper[4660]: E0129 12:10:01.575901 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.575911 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 12:10:01 crc kubenswrapper[4660]: E0129 12:10:01.575918 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.575924 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 12:10:01 crc kubenswrapper[4660]: E0129 12:10:01.575931 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.575937 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 12:10:01 crc kubenswrapper[4660]: E0129 12:10:01.575948 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.575955 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 12:10:01 crc kubenswrapper[4660]: E0129 12:10:01.575960 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.575965 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 12:10:01 crc kubenswrapper[4660]: E0129 12:10:01.575974 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.575980 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 12:10:01 crc kubenswrapper[4660]: E0129 12:10:01.575988 4660 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.575993 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.576079 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.576089 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.576096 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.576103 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.576113 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.576119 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.625334 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.638137 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.638378 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.638503 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.638644 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.638805 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.638920 4660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.639019 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.639125 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.740185 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.740302 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.740633 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.740652 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.740678 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.740724 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.740784 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.740792 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.740814 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.740822 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.740840 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.740891 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.740934 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.740982 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.741026 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.741141 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: I0129 12:10:01.913704 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:01 crc kubenswrapper[4660]: E0129 12:10:01.951822 4660 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.146:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f3268424deff9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 12:10:01.950285817 +0000 UTC m=+239.173227949,LastTimestamp:2026-01-29 12:10:01.950285817 +0000 UTC m=+239.173227949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.086065 4660 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.086124 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: 
connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.333644 4660 generic.go:334] "Generic (PLEG): container finished" podID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" containerID="57c91eeb2c6becb60fe65a233537c09a6deb02f1e4f2586cd691df2cd73991d9" exitCode=0 Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.333747 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ac6d365e-6112-4542-9b4f-5f5ac1227bb4","Type":"ContainerDied","Data":"57c91eeb2c6becb60fe65a233537c09a6deb02f1e4f2586cd691df2cd73991d9"} Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.335552 4660 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.335830 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.336154 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.337763 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f951a7303a93d989d686bd0514db903a53dafd1bd00e34690f6bfc1821340156"} Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.337792 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0502727e8fce69c12f44f44c1d9792d826b68ca052a661c2780ba827dd398cdc"} Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.338627 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.338887 4660 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.339978 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.342019 4660 generic.go:334] "Generic (PLEG): container finished" podID="c5f2d9bf-38d4-4484-a429-37373a55db37" containerID="7eaa500c86515859487699552be3289a8050c3cb4d18758bd4c48075c01975cc" exitCode=0 Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.342073 4660 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gfh45" event={"ID":"c5f2d9bf-38d4-4484-a429-37373a55db37","Type":"ContainerDied","Data":"7eaa500c86515859487699552be3289a8050c3cb4d18758bd4c48075c01975cc"} Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.342949 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.343375 4660 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.344571 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.344822 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.347482 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.349199 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.351599 4660 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7" exitCode=0 Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.351623 4660 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100" exitCode=0 Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.351632 4660 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf" exitCode=0 Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.351640 4660 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3" exitCode=2 Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.351678 4660 scope.go:117] "RemoveContainer" containerID="68c08bc9cf2862ddea7bce78b7b584bd17a58f3b1f02ffe9b00f681352988af8" Jan 29 12:10:02 crc kubenswrapper[4660]: E0129 12:10:02.756335 4660 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: E0129 12:10:02.756926 4660 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: E0129 12:10:02.757175 4660 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: E0129 12:10:02.757434 4660 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: E0129 12:10:02.757640 4660 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:02 crc kubenswrapper[4660]: I0129 12:10:02.757657 4660 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 12:10:02 crc kubenswrapper[4660]: E0129 12:10:02.757867 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" interval="200ms" Jan 29 12:10:02 crc kubenswrapper[4660]: E0129 12:10:02.977641 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" interval="400ms" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.357440 4660 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lkhw" event={"ID":"1b24d899-bb6c-465e-8e0f-594f8581b035","Type":"ContainerStarted","Data":"012d19a229cd8a9f3d90db49270017d6a5f21e7178bba4273ab281eff886414e"} Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.358826 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.359061 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.359331 4660 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.359837 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.359954 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-gfh45" event={"ID":"c5f2d9bf-38d4-4484-a429-37373a55db37","Type":"ContainerStarted","Data":"947081487b95ea680ac24e3d7a6065ea09eed9b3676bf7e7d9b9acdba0fca908"} Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.360241 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.360546 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.360840 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.361150 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.361600 4660 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.361939 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.363019 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 12:10:03 crc kubenswrapper[4660]: E0129 12:10:03.378529 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" interval="800ms" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.480121 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.480807 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.481010 4660 
status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.481172 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.481344 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.956836 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.957889 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.958362 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.958604 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:03 crc kubenswrapper[4660]: I0129 12:10:03.958803 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.093306 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-kube-api-access\") pod \"ac6d365e-6112-4542-9b4f-5f5ac1227bb4\" (UID: 
\"ac6d365e-6112-4542-9b4f-5f5ac1227bb4\") " Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.094276 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-var-lock\") pod \"ac6d365e-6112-4542-9b4f-5f5ac1227bb4\" (UID: \"ac6d365e-6112-4542-9b4f-5f5ac1227bb4\") " Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.094341 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-kubelet-dir\") pod \"ac6d365e-6112-4542-9b4f-5f5ac1227bb4\" (UID: \"ac6d365e-6112-4542-9b4f-5f5ac1227bb4\") " Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.094410 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-var-lock" (OuterVolumeSpecName: "var-lock") pod "ac6d365e-6112-4542-9b4f-5f5ac1227bb4" (UID: "ac6d365e-6112-4542-9b4f-5f5ac1227bb4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.094543 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ac6d365e-6112-4542-9b4f-5f5ac1227bb4" (UID: "ac6d365e-6112-4542-9b4f-5f5ac1227bb4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.094786 4660 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.094805 4660 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.100855 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ac6d365e-6112-4542-9b4f-5f5ac1227bb4" (UID: "ac6d365e-6112-4542-9b4f-5f5ac1227bb4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:10:04 crc kubenswrapper[4660]: E0129 12:10:04.178987 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" interval="1.6s" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.195525 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac6d365e-6112-4542-9b4f-5f5ac1227bb4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.370026 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ac6d365e-6112-4542-9b4f-5f5ac1227bb4","Type":"ContainerDied","Data":"ac3edf91cb5f5656147328398d4ae4873cc6b08eeedb084efc1956aaca2cfbe4"} Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.370268 
4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac3edf91cb5f5656147328398d4ae4873cc6b08eeedb084efc1956aaca2cfbe4" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.370076 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.372791 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.373410 4660 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7" exitCode=0 Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.381183 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.381594 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.381878 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 
38.102.83.146:6443: connect: connection refused" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.382116 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.569393 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.570027 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.570432 4660 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.570672 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.570974 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.571328 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.571561 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.703424 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.703490 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.703576 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 
12:10:04.703797 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.704023 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.704038 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.804789 4660 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.804820 4660 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:04 crc kubenswrapper[4660]: I0129 12:10:04.804829 4660 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.380318 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.380922 4660 scope.go:117] "RemoveContainer" containerID="9fd051e5dbd004cdf2feefa9bfb67d61c27322f431a8521f605af8baaec3b8c7" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.380973 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.394304 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.394913 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.395275 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.395514 4660 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.395778 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial 
tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.396341 4660 scope.go:117] "RemoveContainer" containerID="014137a3e7bb614b7ddb11d812c6ab184b793c7b35b40e156f9a5a115a3a2100" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.408309 4660 scope.go:117] "RemoveContainer" containerID="09058a82001184dfe07ffbae465ce7190096bbc0ffa68f3aba41ab446af486cf" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.421473 4660 scope.go:117] "RemoveContainer" containerID="578adf269d0685c227a3582f8355c46c97c10af095a3fe6c4b30636d54cd74d3" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.432555 4660 scope.go:117] "RemoveContainer" containerID="3e49ddf223b692bcc2e24549b5a5192922cd71bd95a78b5c134363a88821c8d7" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.447907 4660 scope.go:117] "RemoveContainer" containerID="41bdc70652e0e5ad6071d731623cff386c2fff63a145e90e73bd77ed3cdd3b5a" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.477683 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.656246 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-trzg2" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.656301 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-trzg2" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.704844 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-trzg2" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.706846 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.707510 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.707941 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.708161 4660 status_manager.go:851] "Failed to get status for pod" podUID="f78af36d-0cfb-438d-9763-cff2b46f13f7" pod="openshift-marketplace/certified-operators-trzg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-trzg2\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:05 crc kubenswrapper[4660]: I0129 12:10:05.708341 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:05 crc kubenswrapper[4660]: E0129 12:10:05.779747 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" interval="3.2s" Jan 29 12:10:06 crc kubenswrapper[4660]: I0129 12:10:06.423647 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-trzg2" Jan 29 12:10:06 crc kubenswrapper[4660]: I0129 12:10:06.424461 4660 status_manager.go:851] "Failed to get status for pod" podUID="f78af36d-0cfb-438d-9763-cff2b46f13f7" pod="openshift-marketplace/certified-operators-trzg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-trzg2\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:06 crc kubenswrapper[4660]: I0129 12:10:06.424870 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:06 crc kubenswrapper[4660]: I0129 12:10:06.425143 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:06 crc kubenswrapper[4660]: I0129 12:10:06.425416 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 
12:10:06 crc kubenswrapper[4660]: I0129 12:10:06.425673 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:06 crc kubenswrapper[4660]: I0129 12:10:06.651419 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kxjhk" Jan 29 12:10:06 crc kubenswrapper[4660]: I0129 12:10:06.651460 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kxjhk" Jan 29 12:10:06 crc kubenswrapper[4660]: I0129 12:10:06.706191 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kxjhk" Jan 29 12:10:06 crc kubenswrapper[4660]: I0129 12:10:06.706812 4660 status_manager.go:851] "Failed to get status for pod" podUID="f78af36d-0cfb-438d-9763-cff2b46f13f7" pod="openshift-marketplace/certified-operators-trzg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-trzg2\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:06 crc kubenswrapper[4660]: I0129 12:10:06.707306 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:06 crc kubenswrapper[4660]: I0129 12:10:06.707756 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:06 crc kubenswrapper[4660]: I0129 12:10:06.708025 4660 status_manager.go:851] "Failed to get status for pod" podUID="4f109fce-f7d8-4f49-970d-14950db78713" pod="openshift-marketplace/community-operators-kxjhk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxjhk\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:06 crc kubenswrapper[4660]: I0129 12:10:06.708324 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:06 crc kubenswrapper[4660]: I0129 12:10:06.708601 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:06 crc kubenswrapper[4660]: E0129 12:10:06.744244 4660 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.146:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f3268424deff9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 12:10:01.950285817 +0000 UTC m=+239.173227949,LastTimestamp:2026-01-29 12:10:01.950285817 +0000 UTC m=+239.173227949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 12:10:07 crc kubenswrapper[4660]: I0129 12:10:07.435017 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kxjhk" Jan 29 12:10:07 crc kubenswrapper[4660]: I0129 12:10:07.435526 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:07 crc kubenswrapper[4660]: I0129 12:10:07.435812 4660 status_manager.go:851] "Failed to get status for pod" podUID="f78af36d-0cfb-438d-9763-cff2b46f13f7" pod="openshift-marketplace/certified-operators-trzg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-trzg2\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:07 crc kubenswrapper[4660]: I0129 12:10:07.436162 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:07 crc kubenswrapper[4660]: I0129 12:10:07.436435 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:07 crc kubenswrapper[4660]: I0129 12:10:07.436768 4660 status_manager.go:851] "Failed to get status for pod" podUID="4f109fce-f7d8-4f49-970d-14950db78713" pod="openshift-marketplace/community-operators-kxjhk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxjhk\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:07 crc kubenswrapper[4660]: I0129 12:10:07.437082 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.082243 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7lkhw" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.082540 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7lkhw" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.133782 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7lkhw" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.134349 4660 
status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.134805 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.135130 4660 status_manager.go:851] "Failed to get status for pod" podUID="f78af36d-0cfb-438d-9763-cff2b46f13f7" pod="openshift-marketplace/certified-operators-trzg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-trzg2\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.135356 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.135619 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.135881 4660 
status_manager.go:851] "Failed to get status for pod" podUID="4f109fce-f7d8-4f49-970d-14950db78713" pod="openshift-marketplace/community-operators-kxjhk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxjhk\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.441465 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7lkhw" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.442009 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.442472 4660 status_manager.go:851] "Failed to get status for pod" podUID="4f109fce-f7d8-4f49-970d-14950db78713" pod="openshift-marketplace/community-operators-kxjhk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxjhk\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.442787 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.443184 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.443527 4660 status_manager.go:851] "Failed to get status for pod" podUID="f78af36d-0cfb-438d-9763-cff2b46f13f7" pod="openshift-marketplace/certified-operators-trzg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-trzg2\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:08 crc kubenswrapper[4660]: I0129 12:10:08.443845 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:08 crc kubenswrapper[4660]: E0129 12:10:08.981133 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" interval="6.4s" Jan 29 12:10:09 crc kubenswrapper[4660]: I0129 12:10:09.058222 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gfh45" Jan 29 12:10:09 crc kubenswrapper[4660]: I0129 12:10:09.058276 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gfh45" Jan 29 12:10:10 crc kubenswrapper[4660]: I0129 12:10:10.100947 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gfh45" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" containerName="registry-server" probeResult="failure" output=< Jan 29 12:10:10 crc kubenswrapper[4660]: 
timeout: failed to connect service ":50051" within 1s Jan 29 12:10:10 crc kubenswrapper[4660]: > Jan 29 12:10:12 crc kubenswrapper[4660]: I0129 12:10:12.469618 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:12 crc kubenswrapper[4660]: I0129 12:10:12.471142 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:12 crc kubenswrapper[4660]: I0129 12:10:12.471569 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:12 crc kubenswrapper[4660]: I0129 12:10:12.472060 4660 status_manager.go:851] "Failed to get status for pod" podUID="f78af36d-0cfb-438d-9763-cff2b46f13f7" pod="openshift-marketplace/certified-operators-trzg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-trzg2\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:12 crc kubenswrapper[4660]: I0129 12:10:12.472534 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:12 crc kubenswrapper[4660]: I0129 12:10:12.473024 4660 status_manager.go:851] "Failed to get status for 
pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:12 crc kubenswrapper[4660]: I0129 12:10:12.473270 4660 status_manager.go:851] "Failed to get status for pod" podUID="4f109fce-f7d8-4f49-970d-14950db78713" pod="openshift-marketplace/community-operators-kxjhk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxjhk\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:12 crc kubenswrapper[4660]: I0129 12:10:12.486197 4660 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b3a80cd-1010-4777-867a-4f5b9e8c6a34" Jan 29 12:10:12 crc kubenswrapper[4660]: I0129 12:10:12.486235 4660 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b3a80cd-1010-4777-867a-4f5b9e8c6a34" Jan 29 12:10:12 crc kubenswrapper[4660]: E0129 12:10:12.486636 4660 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:12 crc kubenswrapper[4660]: I0129 12:10:12.487253 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.428305 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4c001da9704e27c03afc1968254607cebc76da8367f12baaf158f7e894e7d487"} Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.428701 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a430425c116ac59035ffa25bb82ac86eeb9a194647ff27d8982fea8bc4ab7d9b"} Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.428929 4660 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b3a80cd-1010-4777-867a-4f5b9e8c6a34" Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.428941 4660 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b3a80cd-1010-4777-867a-4f5b9e8c6a34" Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.429521 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:13 crc kubenswrapper[4660]: E0129 12:10:13.429740 4660 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.430131 4660 status_manager.go:851] "Failed to get status for pod" 
podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.430431 4660 status_manager.go:851] "Failed to get status for pod" podUID="f78af36d-0cfb-438d-9763-cff2b46f13f7" pod="openshift-marketplace/certified-operators-trzg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-trzg2\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.430931 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.431126 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.431347 4660 status_manager.go:851] "Failed to get status for pod" podUID="4f109fce-f7d8-4f49-970d-14950db78713" pod="openshift-marketplace/community-operators-kxjhk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxjhk\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.475821 4660 status_manager.go:851] "Failed to get status for pod" 
podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.476822 4660 status_manager.go:851] "Failed to get status for pod" podUID="4f109fce-f7d8-4f49-970d-14950db78713" pod="openshift-marketplace/community-operators-kxjhk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxjhk\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.477085 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.477385 4660 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.477658 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.477956 4660 status_manager.go:851] "Failed to get status for pod" 
podUID="f78af36d-0cfb-438d-9763-cff2b46f13f7" pod="openshift-marketplace/certified-operators-trzg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-trzg2\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:13 crc kubenswrapper[4660]: I0129 12:10:13.478138 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:14 crc kubenswrapper[4660]: I0129 12:10:14.433828 4660 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4c001da9704e27c03afc1968254607cebc76da8367f12baaf158f7e894e7d487" exitCode=0 Jan 29 12:10:14 crc kubenswrapper[4660]: I0129 12:10:14.433870 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4c001da9704e27c03afc1968254607cebc76da8367f12baaf158f7e894e7d487"} Jan 29 12:10:14 crc kubenswrapper[4660]: I0129 12:10:14.434114 4660 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b3a80cd-1010-4777-867a-4f5b9e8c6a34" Jan 29 12:10:14 crc kubenswrapper[4660]: I0129 12:10:14.434126 4660 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b3a80cd-1010-4777-867a-4f5b9e8c6a34" Jan 29 12:10:14 crc kubenswrapper[4660]: I0129 12:10:14.434423 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:14 crc kubenswrapper[4660]: E0129 12:10:14.434426 4660 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:14 crc kubenswrapper[4660]: I0129 12:10:14.434680 4660 status_manager.go:851] "Failed to get status for pod" podUID="4f109fce-f7d8-4f49-970d-14950db78713" pod="openshift-marketplace/community-operators-kxjhk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxjhk\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:14 crc kubenswrapper[4660]: I0129 12:10:14.435082 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:14 crc kubenswrapper[4660]: I0129 12:10:14.435441 4660 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:14 crc kubenswrapper[4660]: I0129 12:10:14.435634 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:14 crc kubenswrapper[4660]: I0129 12:10:14.435835 4660 status_manager.go:851] "Failed to get status for pod" podUID="f78af36d-0cfb-438d-9763-cff2b46f13f7" pod="openshift-marketplace/certified-operators-trzg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-trzg2\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:14 crc kubenswrapper[4660]: I0129 12:10:14.435987 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:15 crc kubenswrapper[4660]: I0129 12:10:15.286036 4660 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 12:10:15 crc kubenswrapper[4660]: I0129 12:10:15.286148 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 12:10:15 crc kubenswrapper[4660]: E0129 12:10:15.382173 4660 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.146:6443: connect: connection refused" interval="7s" Jan 29 12:10:15 crc kubenswrapper[4660]: I0129 12:10:15.460936 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c1bc33d2d26489e10f5209419e996ae0aeb8aad48fb9e04d5307db7f275b9dce"} Jan 29 12:10:15 crc kubenswrapper[4660]: I0129 12:10:15.463290 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 12:10:15 crc kubenswrapper[4660]: I0129 12:10:15.463326 4660 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea" exitCode=1 Jan 29 12:10:15 crc kubenswrapper[4660]: I0129 12:10:15.463349 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea"} Jan 29 12:10:15 crc kubenswrapper[4660]: I0129 12:10:15.463840 4660 scope.go:117] "RemoveContainer" containerID="c8f2bdecca8f0a4aadbf16f8256b6ac831a31f097097d420e8a673ebaa5a13ea" Jan 29 12:10:15 crc kubenswrapper[4660]: I0129 12:10:15.464032 4660 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:15 crc kubenswrapper[4660]: I0129 
12:10:15.464260 4660 status_manager.go:851] "Failed to get status for pod" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:15 crc kubenswrapper[4660]: I0129 12:10:15.464524 4660 status_manager.go:851] "Failed to get status for pod" podUID="f78af36d-0cfb-438d-9763-cff2b46f13f7" pod="openshift-marketplace/certified-operators-trzg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-trzg2\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:15 crc kubenswrapper[4660]: I0129 12:10:15.464752 4660 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:15 crc kubenswrapper[4660]: I0129 12:10:15.464946 4660 status_manager.go:851] "Failed to get status for pod" podUID="c5f2d9bf-38d4-4484-a429-37373a55db37" pod="openshift-marketplace/redhat-operators-gfh45" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gfh45\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:15 crc kubenswrapper[4660]: I0129 12:10:15.465139 4660 status_manager.go:851] "Failed to get status for pod" podUID="4f109fce-f7d8-4f49-970d-14950db78713" pod="openshift-marketplace/community-operators-kxjhk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxjhk\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:15 crc kubenswrapper[4660]: I0129 
12:10:15.465341 4660 status_manager.go:851] "Failed to get status for pod" podUID="1b24d899-bb6c-465e-8e0f-594f8581b035" pod="openshift-marketplace/redhat-marketplace-7lkhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7lkhw\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:15 crc kubenswrapper[4660]: I0129 12:10:15.465520 4660 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.146:6443: connect: connection refused" Jan 29 12:10:16 crc kubenswrapper[4660]: I0129 12:10:16.274944 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-h994k" podUID="dde2be07-f4f9-4868-801f-4a0b650a5b7f" containerName="oauth-openshift" containerID="cri-o://52a2b7c906812812b108af6298e6aa9a34c47711e2e7a368db28187799b33d27" gracePeriod=15 Jan 29 12:10:16 crc kubenswrapper[4660]: I0129 12:10:16.296872 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:10:16 crc kubenswrapper[4660]: I0129 12:10:16.491822 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0a79dbfc003c9c596197840a1d19d7955772c5a85b84adfa5a89de728ecfa18c"} Jan 29 12:10:16 crc kubenswrapper[4660]: I0129 12:10:16.492217 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"027acd5a4e7305dc101030cb71d3ccf08c9a4853cc6cbbc84efa2f854ed9ef86"} Jan 29 12:10:16 crc kubenswrapper[4660]: 
I0129 12:10:16.492238 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b5e36088369a398f78fa23ceecab906db44d45bc1afd8d7d9aa3db8ce8bd062d"} Jan 29 12:10:16 crc kubenswrapper[4660]: I0129 12:10:16.492250 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7b330b5ef8e3be7291f231fab5c60f0ee6fd204d814d6f10c3d6831b7130b538"} Jan 29 12:10:16 crc kubenswrapper[4660]: I0129 12:10:16.492278 4660 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b3a80cd-1010-4777-867a-4f5b9e8c6a34" Jan 29 12:10:16 crc kubenswrapper[4660]: I0129 12:10:16.492307 4660 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b3a80cd-1010-4777-867a-4f5b9e8c6a34" Jan 29 12:10:16 crc kubenswrapper[4660]: I0129 12:10:16.492309 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:16 crc kubenswrapper[4660]: I0129 12:10:16.496611 4660 generic.go:334] "Generic (PLEG): container finished" podID="dde2be07-f4f9-4868-801f-4a0b650a5b7f" containerID="52a2b7c906812812b108af6298e6aa9a34c47711e2e7a368db28187799b33d27" exitCode=0 Jan 29 12:10:16 crc kubenswrapper[4660]: I0129 12:10:16.496672 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h994k" event={"ID":"dde2be07-f4f9-4868-801f-4a0b650a5b7f","Type":"ContainerDied","Data":"52a2b7c906812812b108af6298e6aa9a34c47711e2e7a368db28187799b33d27"} Jan 29 12:10:16 crc kubenswrapper[4660]: I0129 12:10:16.508278 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 12:10:16 crc kubenswrapper[4660]: I0129 12:10:16.508350 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8f18ba97968c29ea4c611a8e84bb78c7cf481adb13097cb3946ef5032ebd442c"} Jan 29 12:10:16 crc kubenswrapper[4660]: I0129 12:10:16.906559 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.049469 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-service-ca\") pod \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.049550 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-provider-selection\") pod \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.049575 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-audit-policies\") pod \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.049598 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-login\") pod \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.049625 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-cliconfig\") pod \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.049672 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-router-certs\") pod \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.049716 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-trusted-ca-bundle\") pod \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.049781 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-ocp-branding-template\") pod \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.049808 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-session\") pod \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.049828 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dde2be07-f4f9-4868-801f-4a0b650a5b7f-audit-dir\") pod \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.049857 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd2bk\" (UniqueName: \"kubernetes.io/projected/dde2be07-f4f9-4868-801f-4a0b650a5b7f-kube-api-access-rd2bk\") pod \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.049887 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-serving-cert\") pod \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.049912 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-error\") pod \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.049944 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-idp-0-file-data\") pod 
\"dde2be07-f4f9-4868-801f-4a0b650a5b7f\" (UID: \"dde2be07-f4f9-4868-801f-4a0b650a5b7f\") " Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.050988 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "dde2be07-f4f9-4868-801f-4a0b650a5b7f" (UID: "dde2be07-f4f9-4868-801f-4a0b650a5b7f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.051422 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dde2be07-f4f9-4868-801f-4a0b650a5b7f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "dde2be07-f4f9-4868-801f-4a0b650a5b7f" (UID: "dde2be07-f4f9-4868-801f-4a0b650a5b7f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.051798 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "dde2be07-f4f9-4868-801f-4a0b650a5b7f" (UID: "dde2be07-f4f9-4868-801f-4a0b650a5b7f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.051857 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "dde2be07-f4f9-4868-801f-4a0b650a5b7f" (UID: "dde2be07-f4f9-4868-801f-4a0b650a5b7f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.053430 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "dde2be07-f4f9-4868-801f-4a0b650a5b7f" (UID: "dde2be07-f4f9-4868-801f-4a0b650a5b7f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.058078 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "dde2be07-f4f9-4868-801f-4a0b650a5b7f" (UID: "dde2be07-f4f9-4868-801f-4a0b650a5b7f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.059297 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "dde2be07-f4f9-4868-801f-4a0b650a5b7f" (UID: "dde2be07-f4f9-4868-801f-4a0b650a5b7f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.076553 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "dde2be07-f4f9-4868-801f-4a0b650a5b7f" (UID: "dde2be07-f4f9-4868-801f-4a0b650a5b7f"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.077850 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dde2be07-f4f9-4868-801f-4a0b650a5b7f-kube-api-access-rd2bk" (OuterVolumeSpecName: "kube-api-access-rd2bk") pod "dde2be07-f4f9-4868-801f-4a0b650a5b7f" (UID: "dde2be07-f4f9-4868-801f-4a0b650a5b7f"). InnerVolumeSpecName "kube-api-access-rd2bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.078799 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "dde2be07-f4f9-4868-801f-4a0b650a5b7f" (UID: "dde2be07-f4f9-4868-801f-4a0b650a5b7f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.079038 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "dde2be07-f4f9-4868-801f-4a0b650a5b7f" (UID: "dde2be07-f4f9-4868-801f-4a0b650a5b7f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.079041 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "dde2be07-f4f9-4868-801f-4a0b650a5b7f" (UID: "dde2be07-f4f9-4868-801f-4a0b650a5b7f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.080197 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "dde2be07-f4f9-4868-801f-4a0b650a5b7f" (UID: "dde2be07-f4f9-4868-801f-4a0b650a5b7f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.080542 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "dde2be07-f4f9-4868-801f-4a0b650a5b7f" (UID: "dde2be07-f4f9-4868-801f-4a0b650a5b7f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.151279 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.151310 4660 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.151320 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.151333 4660 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.151342 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.151350 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.151361 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.151372 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.151380 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.151388 4660 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dde2be07-f4f9-4868-801f-4a0b650a5b7f-audit-dir\") on node \"crc\" DevicePath 
\"\"" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.151396 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd2bk\" (UniqueName: \"kubernetes.io/projected/dde2be07-f4f9-4868-801f-4a0b650a5b7f-kube-api-access-rd2bk\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.151406 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.151415 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.151424 4660 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dde2be07-f4f9-4868-801f-4a0b650a5b7f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.487467 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.487843 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.514175 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-h994k" event={"ID":"dde2be07-f4f9-4868-801f-4a0b650a5b7f","Type":"ContainerDied","Data":"b85c0d1c9f9e935e9083d37180ae66f740082e94484439d2f38e10ea0389753c"} Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.514195 4660 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-h994k" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.514245 4660 scope.go:117] "RemoveContainer" containerID="52a2b7c906812812b108af6298e6aa9a34c47711e2e7a368db28187799b33d27" Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.543159 4660 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]log ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]etcd ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/generic-apiserver-start-informers ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/priority-and-fairness-filter ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/start-apiextensions-informers ok Jan 29 12:10:17 crc kubenswrapper[4660]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/crd-informer-synced ok Jan 29 12:10:17 crc kubenswrapper[4660]: 
[+]poststarthook/start-system-namespaces-controller ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 29 12:10:17 crc kubenswrapper[4660]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 29 12:10:17 crc kubenswrapper[4660]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/bootstrap-controller ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/start-kube-aggregator-informers ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/apiservice-registration-controller ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/apiservice-discovery-controller ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]autoregister-completion ok Jan 29 12:10:17 crc kubenswrapper[4660]: [+]poststarthook/apiservice-openapi-controller ok Jan 29 12:10:17 crc kubenswrapper[4660]: 
[+]poststarthook/apiservice-openapiv3-controller ok Jan 29 12:10:17 crc kubenswrapper[4660]: livez check failed Jan 29 12:10:17 crc kubenswrapper[4660]: I0129 12:10:17.543219 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:10:19 crc kubenswrapper[4660]: I0129 12:10:19.100647 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gfh45" Jan 29 12:10:19 crc kubenswrapper[4660]: I0129 12:10:19.142391 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gfh45" Jan 29 12:10:22 crc kubenswrapper[4660]: I0129 12:10:22.310884 4660 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:22 crc kubenswrapper[4660]: I0129 12:10:22.470202 4660 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d659bb93-29c0-4245-beac-40d03d44f91b" Jan 29 12:10:22 crc kubenswrapper[4660]: I0129 12:10:22.549817 4660 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b3a80cd-1010-4777-867a-4f5b9e8c6a34" Jan 29 12:10:22 crc kubenswrapper[4660]: I0129 12:10:22.549848 4660 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3b3a80cd-1010-4777-867a-4f5b9e8c6a34" Jan 29 12:10:22 crc kubenswrapper[4660]: I0129 12:10:22.553244 4660 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d659bb93-29c0-4245-beac-40d03d44f91b" Jan 29 12:10:25 crc 
kubenswrapper[4660]: I0129 12:10:25.285001 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:10:26 crc kubenswrapper[4660]: I0129 12:10:26.296723 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:10:26 crc kubenswrapper[4660]: I0129 12:10:26.300720 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:10:26 crc kubenswrapper[4660]: I0129 12:10:26.576547 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:10:32 crc kubenswrapper[4660]: I0129 12:10:32.194938 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 12:10:32 crc kubenswrapper[4660]: I0129 12:10:32.582397 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 12:10:33 crc kubenswrapper[4660]: I0129 12:10:33.551737 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 12:10:33 crc kubenswrapper[4660]: I0129 12:10:33.576963 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 12:10:34 crc kubenswrapper[4660]: I0129 12:10:34.125547 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 12:10:34 crc kubenswrapper[4660]: I0129 12:10:34.198325 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 12:10:34 crc kubenswrapper[4660]: 
I0129 12:10:34.207420 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 12:10:34 crc kubenswrapper[4660]: I0129 12:10:34.386544 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 12:10:34 crc kubenswrapper[4660]: I0129 12:10:34.387755 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 12:10:34 crc kubenswrapper[4660]: I0129 12:10:34.514305 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 12:10:34 crc kubenswrapper[4660]: I0129 12:10:34.569968 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 12:10:34 crc kubenswrapper[4660]: I0129 12:10:34.622419 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 12:10:34 crc kubenswrapper[4660]: I0129 12:10:34.635241 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 12:10:34 crc kubenswrapper[4660]: I0129 12:10:34.646646 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 12:10:34 crc kubenswrapper[4660]: I0129 12:10:34.667630 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 12:10:34 crc kubenswrapper[4660]: I0129 12:10:34.813885 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 12:10:34 crc kubenswrapper[4660]: I0129 12:10:34.963749 4660 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 12:10:35 crc kubenswrapper[4660]: I0129 12:10:35.113026 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 12:10:35 crc kubenswrapper[4660]: I0129 12:10:35.130941 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 12:10:35 crc kubenswrapper[4660]: I0129 12:10:35.411440 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 12:10:35 crc kubenswrapper[4660]: I0129 12:10:35.533877 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 12:10:35 crc kubenswrapper[4660]: I0129 12:10:35.633964 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 12:10:35 crc kubenswrapper[4660]: I0129 12:10:35.644084 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 12:10:35 crc kubenswrapper[4660]: I0129 12:10:35.677008 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 12:10:35 crc kubenswrapper[4660]: I0129 12:10:35.875181 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 12:10:35 crc kubenswrapper[4660]: I0129 12:10:35.956817 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 12:10:36 crc kubenswrapper[4660]: I0129 12:10:36.090222 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 12:10:36 crc kubenswrapper[4660]: I0129 12:10:36.182518 4660 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 12:10:36 crc kubenswrapper[4660]: I0129 12:10:36.248859 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 12:10:36 crc kubenswrapper[4660]: I0129 12:10:36.251857 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 12:10:36 crc kubenswrapper[4660]: I0129 12:10:36.311111 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 12:10:36 crc kubenswrapper[4660]: I0129 12:10:36.327910 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 12:10:36 crc kubenswrapper[4660]: I0129 12:10:36.359750 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 12:10:36 crc kubenswrapper[4660]: I0129 12:10:36.408495 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 12:10:36 crc kubenswrapper[4660]: I0129 12:10:36.452318 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 12:10:36 crc kubenswrapper[4660]: I0129 12:10:36.540207 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 12:10:36 crc kubenswrapper[4660]: I0129 12:10:36.617860 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 12:10:36 crc kubenswrapper[4660]: I0129 12:10:36.790581 4660 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 12:10:36 crc kubenswrapper[4660]: I0129 12:10:36.795278 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 12:10:36 crc kubenswrapper[4660]: I0129 12:10:36.847113 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 12:10:36 crc kubenswrapper[4660]: I0129 12:10:36.902176 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 12:10:37 crc kubenswrapper[4660]: I0129 12:10:37.021941 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 12:10:37 crc kubenswrapper[4660]: I0129 12:10:37.022733 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 12:10:37 crc kubenswrapper[4660]: I0129 12:10:37.026155 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 12:10:37 crc kubenswrapper[4660]: I0129 12:10:37.132285 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 12:10:37 crc kubenswrapper[4660]: I0129 12:10:37.222340 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 12:10:37 crc kubenswrapper[4660]: I0129 12:10:37.246737 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 12:10:37 crc kubenswrapper[4660]: I0129 12:10:37.370767 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 12:10:37 crc kubenswrapper[4660]: I0129 12:10:37.712752 4660 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 12:10:37 crc kubenswrapper[4660]: I0129 12:10:37.757610 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 12:10:37 crc kubenswrapper[4660]: I0129 12:10:37.843203 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 12:10:37 crc kubenswrapper[4660]: I0129 12:10:37.877326 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 12:10:37 crc kubenswrapper[4660]: I0129 12:10:37.884794 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 12:10:37 crc kubenswrapper[4660]: I0129 12:10:37.967956 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 12:10:37 crc kubenswrapper[4660]: I0129 12:10:37.988235 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.004982 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.028291 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.049623 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.272842 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.274518 4660 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.399080 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.417482 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.436656 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.471394 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.559603 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.578488 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.624381 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.650790 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.704574 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.732007 4660 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.772448 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.778665 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.788133 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.816089 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.965990 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 12:10:38 crc kubenswrapper[4660]: I0129 12:10:38.966172 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.050201 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.051035 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.073665 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.109808 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.132671 4660 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.162049 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.189021 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.237484 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.390901 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.394445 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.401606 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.460459 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.465638 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.496300 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.573918 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 12:10:39 crc kubenswrapper[4660]: 
I0129 12:10:39.602554 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.632076 4660 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.652849 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.678389 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.797850 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.889443 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.894433 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.915583 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.979441 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 12:10:39 crc kubenswrapper[4660]: I0129 12:10:39.982807 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.016457 4660 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.095243 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.176917 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.195185 4660 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.254553 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.262364 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.290500 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.299599 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.400394 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.401311 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.420414 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.422921 4660 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.459906 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.480280 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.518031 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.626676 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.840444 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 12:10:40 crc kubenswrapper[4660]: I0129 12:10:40.871382 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.038648 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.268621 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.285244 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.377922 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 12:10:41 
crc kubenswrapper[4660]: I0129 12:10:41.380742 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.498976 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.682849 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.687424 4660 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.688194 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gfh45" podStartSLOduration=40.900539105 podStartE2EDuration="43.688174217s" podCreationTimestamp="2026-01-29 12:09:58 +0000 UTC" firstStartedPulling="2026-01-29 12:10:00.31039014 +0000 UTC m=+237.533332272" lastFinishedPulling="2026-01-29 12:10:03.098025252 +0000 UTC m=+240.320967384" observedRunningTime="2026-01-29 12:10:22.408772857 +0000 UTC m=+259.631715009" watchObservedRunningTime="2026-01-29 12:10:41.688174217 +0000 UTC m=+278.911116349" Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.690453 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7lkhw" podStartSLOduration=41.749247675 podStartE2EDuration="44.690430522s" podCreationTimestamp="2026-01-29 12:09:57 +0000 UTC" firstStartedPulling="2026-01-29 12:09:59.299647589 +0000 UTC m=+236.522589721" lastFinishedPulling="2026-01-29 12:10:02.240830436 +0000 UTC m=+239.463772568" observedRunningTime="2026-01-29 12:10:22.465008798 +0000 UTC m=+259.687950930" watchObservedRunningTime="2026-01-29 12:10:41.690430522 +0000 UTC m=+278.913372654" Jan 29 
12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.691388 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.692779 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=40.69275973 podStartE2EDuration="40.69275973s" podCreationTimestamp="2026-01-29 12:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:10:22.377249756 +0000 UTC m=+259.600191888" watchObservedRunningTime="2026-01-29 12:10:41.69275973 +0000 UTC m=+278.915701862" Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.693526 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-h994k","openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.693570 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.697448 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.707813 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.707798924 podStartE2EDuration="19.707798924s" podCreationTimestamp="2026-01-29 12:10:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:10:41.706782635 +0000 UTC m=+278.929724767" watchObservedRunningTime="2026-01-29 12:10:41.707798924 +0000 UTC m=+278.930741056" Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.771786 4660 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 12:10:41 crc kubenswrapper[4660]: I0129 12:10:41.910578 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.071757 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.092675 4660 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.117466 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.181171 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.333365 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.342518 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.408003 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.413403 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.480486 4660 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-oauth-config" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.508475 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.542802 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.574704 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.598654 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.657567 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.728044 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.872097 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.902081 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.992144 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 12:10:42 crc kubenswrapper[4660]: I0129 12:10:42.999844 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 12:10:43 crc 
kubenswrapper[4660]: I0129 12:10:43.064610 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 12:10:43 crc kubenswrapper[4660]: I0129 12:10:43.154963 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 12:10:43 crc kubenswrapper[4660]: I0129 12:10:43.248115 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 12:10:43 crc kubenswrapper[4660]: I0129 12:10:43.302134 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 12:10:43 crc kubenswrapper[4660]: I0129 12:10:43.317198 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 12:10:43 crc kubenswrapper[4660]: I0129 12:10:43.351035 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 12:10:43 crc kubenswrapper[4660]: I0129 12:10:43.413489 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 12:10:43 crc kubenswrapper[4660]: I0129 12:10:43.472423 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 12:10:43 crc kubenswrapper[4660]: I0129 12:10:43.476123 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dde2be07-f4f9-4868-801f-4a0b650a5b7f" path="/var/lib/kubelet/pods/dde2be07-f4f9-4868-801f-4a0b650a5b7f/volumes" Jan 29 12:10:43 crc kubenswrapper[4660]: I0129 12:10:43.489256 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 12:10:43 crc kubenswrapper[4660]: I0129 12:10:43.611852 4660 
reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 12:10:43 crc kubenswrapper[4660]: I0129 12:10:43.702108 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 12:10:43 crc kubenswrapper[4660]: I0129 12:10:43.708151 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 12:10:43 crc kubenswrapper[4660]: I0129 12:10:43.734983 4660 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 12:10:43 crc kubenswrapper[4660]: I0129 12:10:43.735229 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://f951a7303a93d989d686bd0514db903a53dafd1bd00e34690f6bfc1821340156" gracePeriod=5 Jan 29 12:10:43 crc kubenswrapper[4660]: I0129 12:10:43.922622 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:43.959809 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.031076 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.044780 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.047741 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 12:10:44 crc 
kubenswrapper[4660]: I0129 12:10:44.056306 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.164367 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.166767 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.171827 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.177662 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.209861 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.221559 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.246970 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.299281 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.323794 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.362710 4660 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.382357 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.413868 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.452433 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.473582 4660 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.614239 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.652673 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.658478 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.680191 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.686503 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.691175 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.699124 4660 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.714822 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.730164 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.761722 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.810422 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.848784 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.898282 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 12:10:44 crc kubenswrapper[4660]: I0129 12:10:44.954683 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 12:10:45 crc kubenswrapper[4660]: I0129 12:10:45.053270 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 12:10:45 crc kubenswrapper[4660]: I0129 12:10:45.094233 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 12:10:45 crc kubenswrapper[4660]: I0129 12:10:45.159871 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 12:10:45 crc kubenswrapper[4660]: I0129 12:10:45.189995 4660 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 12:10:45 crc kubenswrapper[4660]: I0129 12:10:45.199896 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 12:10:45 crc kubenswrapper[4660]: I0129 12:10:45.472574 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 12:10:45 crc kubenswrapper[4660]: I0129 12:10:45.477834 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 12:10:45 crc kubenswrapper[4660]: I0129 12:10:45.568132 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 12:10:45 crc kubenswrapper[4660]: I0129 12:10:45.571374 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 12:10:45 crc kubenswrapper[4660]: I0129 12:10:45.669517 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 12:10:45 crc kubenswrapper[4660]: I0129 12:10:45.733656 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 12:10:45 crc kubenswrapper[4660]: I0129 12:10:45.763322 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 12:10:46 crc kubenswrapper[4660]: I0129 12:10:46.037236 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 12:10:46 crc kubenswrapper[4660]: I0129 12:10:46.165577 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 12:10:46 crc kubenswrapper[4660]: I0129 12:10:46.166969 
4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 12:10:46 crc kubenswrapper[4660]: I0129 12:10:46.242122 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 12:10:46 crc kubenswrapper[4660]: I0129 12:10:46.343227 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 12:10:46 crc kubenswrapper[4660]: I0129 12:10:46.372873 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 12:10:46 crc kubenswrapper[4660]: I0129 12:10:46.381257 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 12:10:46 crc kubenswrapper[4660]: I0129 12:10:46.460407 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 12:10:46 crc kubenswrapper[4660]: I0129 12:10:46.547964 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 12:10:46 crc kubenswrapper[4660]: I0129 12:10:46.724719 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 12:10:46 crc kubenswrapper[4660]: I0129 12:10:46.874623 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 12:10:46 crc kubenswrapper[4660]: I0129 12:10:46.908001 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 12:10:47 crc kubenswrapper[4660]: I0129 12:10:47.000867 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 12:10:47 crc kubenswrapper[4660]: I0129 
12:10:47.046898 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 12:10:47 crc kubenswrapper[4660]: I0129 12:10:47.154316 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 12:10:47 crc kubenswrapper[4660]: I0129 12:10:47.206942 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 12:10:47 crc kubenswrapper[4660]: I0129 12:10:47.244996 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 12:10:47 crc kubenswrapper[4660]: I0129 12:10:47.246708 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 12:10:47 crc kubenswrapper[4660]: I0129 12:10:47.261522 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 12:10:47 crc kubenswrapper[4660]: I0129 12:10:47.527219 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 12:10:47 crc kubenswrapper[4660]: I0129 12:10:47.554045 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 12:10:47 crc kubenswrapper[4660]: I0129 12:10:47.587912 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 12:10:47 crc kubenswrapper[4660]: I0129 12:10:47.654965 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 12:10:47 crc kubenswrapper[4660]: I0129 12:10:47.661999 4660 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 12:10:47 crc kubenswrapper[4660]: I0129 12:10:47.748731 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 12:10:48 crc kubenswrapper[4660]: I0129 12:10:48.047418 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 12:10:48 crc kubenswrapper[4660]: I0129 12:10:48.220377 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 12:10:48 crc kubenswrapper[4660]: I0129 12:10:48.392848 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.315573 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.316032 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.424072 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.424149 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.424171 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.424196 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.424238 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.424321 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: 
"var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.424321 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.424340 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.424388 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.424466 4660 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.424483 4660 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.424495 4660 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.424508 4660 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.432186 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.479195 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.479931 4660 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.490903 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.490938 4660 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="51af7b00-a138-4955-9c3d-a1180ed60fb6" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.495143 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.495192 4660 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="51af7b00-a138-4955-9c3d-a1180ed60fb6" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.509898 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.525352 4660 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.682946 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.683004 4660 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="f951a7303a93d989d686bd0514db903a53dafd1bd00e34690f6bfc1821340156" exitCode=137 Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.683049 4660 scope.go:117] "RemoveContainer" containerID="f951a7303a93d989d686bd0514db903a53dafd1bd00e34690f6bfc1821340156" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.683270 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.697818 4660 scope.go:117] "RemoveContainer" containerID="f951a7303a93d989d686bd0514db903a53dafd1bd00e34690f6bfc1821340156" Jan 29 12:10:49 crc kubenswrapper[4660]: E0129 12:10:49.698282 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f951a7303a93d989d686bd0514db903a53dafd1bd00e34690f6bfc1821340156\": container with ID starting with f951a7303a93d989d686bd0514db903a53dafd1bd00e34690f6bfc1821340156 not found: ID does not exist" containerID="f951a7303a93d989d686bd0514db903a53dafd1bd00e34690f6bfc1821340156" Jan 29 12:10:49 crc kubenswrapper[4660]: I0129 12:10:49.698323 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f951a7303a93d989d686bd0514db903a53dafd1bd00e34690f6bfc1821340156"} err="failed to get container status \"f951a7303a93d989d686bd0514db903a53dafd1bd00e34690f6bfc1821340156\": rpc error: code = NotFound desc = could not find container \"f951a7303a93d989d686bd0514db903a53dafd1bd00e34690f6bfc1821340156\": container with ID starting with f951a7303a93d989d686bd0514db903a53dafd1bd00e34690f6bfc1821340156 not found: 
ID does not exist" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.010745 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.640121 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7484f6b95f-gp9pj"] Jan 29 12:10:50 crc kubenswrapper[4660]: E0129 12:10:50.640306 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.640317 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 12:10:50 crc kubenswrapper[4660]: E0129 12:10:50.640333 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" containerName="installer" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.640339 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" containerName="installer" Jan 29 12:10:50 crc kubenswrapper[4660]: E0129 12:10:50.640350 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde2be07-f4f9-4868-801f-4a0b650a5b7f" containerName="oauth-openshift" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.640356 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde2be07-f4f9-4868-801f-4a0b650a5b7f" containerName="oauth-openshift" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.640453 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="dde2be07-f4f9-4868-801f-4a0b650a5b7f" containerName="oauth-openshift" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.640467 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac6d365e-6112-4542-9b4f-5f5ac1227bb4" containerName="installer" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 
12:10:50.640474 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.640857 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.646170 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.646552 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.646603 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.647096 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.647368 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.647582 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.647671 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.648832 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.649092 4660 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.649440 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.651809 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.654456 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.654478 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.659494 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.662500 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7484f6b95f-gp9pj"] Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.671465 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.689910 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.740323 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-user-template-login\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " 
pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.740378 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-service-ca\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.740494 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.740570 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-router-certs\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.740627 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.740769 4660 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-audit-policies\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.740818 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-session\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.740872 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.740903 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.740956 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8bw5\" (UniqueName: 
\"kubernetes.io/projected/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-kube-api-access-q8bw5\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.740991 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-user-template-error\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.741050 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.741113 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.741205 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-audit-dir\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: 
\"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.778733 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.842875 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-audit-policies\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.842929 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-session\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.842960 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.842984 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " 
pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.843027 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8bw5\" (UniqueName: \"kubernetes.io/projected/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-kube-api-access-q8bw5\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.843051 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-user-template-error\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.843074 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.843109 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.843152 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-audit-dir\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.843180 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-user-template-login\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.843207 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-service-ca\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.843235 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.843265 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-router-certs\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " 
pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.843293 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.844576 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.844620 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-audit-policies\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.844662 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-audit-dir\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.844682 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.844725 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-service-ca\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.849426 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.849475 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-user-template-error\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.849771 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " 
pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.850033 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-router-certs\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.850187 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-system-session\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.850370 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.851035 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.852681 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-v4-0-config-user-template-login\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.863000 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8bw5\" (UniqueName: \"kubernetes.io/projected/fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e-kube-api-access-q8bw5\") pod \"oauth-openshift-7484f6b95f-gp9pj\" (UID: \"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:50 crc kubenswrapper[4660]: I0129 12:10:50.963552 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:51 crc kubenswrapper[4660]: I0129 12:10:51.352073 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7484f6b95f-gp9pj"] Jan 29 12:10:51 crc kubenswrapper[4660]: I0129 12:10:51.697282 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" event={"ID":"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e","Type":"ContainerStarted","Data":"95a74e5edf84c5b072dfe8a8bee0c11f910b54c2b75d6fcd2abbe9b62df80a50"} Jan 29 12:10:51 crc kubenswrapper[4660]: I0129 12:10:51.697788 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" event={"ID":"fe948d25-d88e-4c7b-9b18-73ccfd3c1a6e","Type":"ContainerStarted","Data":"e65010db1a021291e8f18809088d9c0f8dc3eeca70ff056e9fb6ecec8452bedc"} Jan 29 12:10:51 crc kubenswrapper[4660]: I0129 12:10:51.697829 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:10:51 crc kubenswrapper[4660]: I0129 12:10:51.721001 4660 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" podStartSLOduration=60.720976641 podStartE2EDuration="1m0.720976641s" podCreationTimestamp="2026-01-29 12:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:10:51.716633455 +0000 UTC m=+288.939575597" watchObservedRunningTime="2026-01-29 12:10:51.720976641 +0000 UTC m=+288.943918793" Jan 29 12:10:52 crc kubenswrapper[4660]: I0129 12:10:52.134012 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7484f6b95f-gp9pj" Jan 29 12:11:03 crc kubenswrapper[4660]: I0129 12:11:03.266576 4660 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.029422 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fb894"] Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.030566 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.047495 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fb894"] Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.130703 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6f4dea29-550e-4aa5-bbf1-c24a337c1229-registry-tls\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.130761 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6f4dea29-550e-4aa5-bbf1-c24a337c1229-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.130786 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f4dea29-550e-4aa5-bbf1-c24a337c1229-trusted-ca\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.130908 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6f4dea29-550e-4aa5-bbf1-c24a337c1229-registry-certificates\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 
29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.131017 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6f4dea29-550e-4aa5-bbf1-c24a337c1229-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.131055 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f4dea29-550e-4aa5-bbf1-c24a337c1229-bound-sa-token\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.131091 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sls6d\" (UniqueName: \"kubernetes.io/projected/6f4dea29-550e-4aa5-bbf1-c24a337c1229-kube-api-access-sls6d\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.131154 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.169284 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.232337 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6f4dea29-550e-4aa5-bbf1-c24a337c1229-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.232478 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f4dea29-550e-4aa5-bbf1-c24a337c1229-bound-sa-token\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.232518 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sls6d\" (UniqueName: \"kubernetes.io/projected/6f4dea29-550e-4aa5-bbf1-c24a337c1229-kube-api-access-sls6d\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.232571 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6f4dea29-550e-4aa5-bbf1-c24a337c1229-registry-tls\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.232601 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6f4dea29-550e-4aa5-bbf1-c24a337c1229-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.232625 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f4dea29-550e-4aa5-bbf1-c24a337c1229-trusted-ca\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.232649 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6f4dea29-550e-4aa5-bbf1-c24a337c1229-registry-certificates\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.233450 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6f4dea29-550e-4aa5-bbf1-c24a337c1229-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.234024 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6f4dea29-550e-4aa5-bbf1-c24a337c1229-registry-certificates\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.234392 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f4dea29-550e-4aa5-bbf1-c24a337c1229-trusted-ca\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.237889 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6f4dea29-550e-4aa5-bbf1-c24a337c1229-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.238192 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6f4dea29-550e-4aa5-bbf1-c24a337c1229-registry-tls\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.260242 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f4dea29-550e-4aa5-bbf1-c24a337c1229-bound-sa-token\") pod \"image-registry-66df7c8f76-fb894\" (UID: \"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.264822 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sls6d\" (UniqueName: \"kubernetes.io/projected/6f4dea29-550e-4aa5-bbf1-c24a337c1229-kube-api-access-sls6d\") pod \"image-registry-66df7c8f76-fb894\" (UID: 
\"6f4dea29-550e-4aa5-bbf1-c24a337c1229\") " pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.343830 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.753334 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fb894"] Jan 29 12:11:30 crc kubenswrapper[4660]: I0129 12:11:30.914937 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fb894" event={"ID":"6f4dea29-550e-4aa5-bbf1-c24a337c1229","Type":"ContainerStarted","Data":"dceb692d381a95d7e057ddee74b10c1a138c2a031dcae6dd05155dea711e3c9b"} Jan 29 12:11:31 crc kubenswrapper[4660]: I0129 12:11:31.927210 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fb894" event={"ID":"6f4dea29-550e-4aa5-bbf1-c24a337c1229","Type":"ContainerStarted","Data":"6f85a3585d8e528d944f12c16e8d27911035720d95c8336e64bc4d53b3779e86"} Jan 29 12:11:31 crc kubenswrapper[4660]: I0129 12:11:31.927588 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:31 crc kubenswrapper[4660]: I0129 12:11:31.948144 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-fb894" podStartSLOduration=1.948053372 podStartE2EDuration="1.948053372s" podCreationTimestamp="2026-01-29 12:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:11:31.945357874 +0000 UTC m=+329.168300006" watchObservedRunningTime="2026-01-29 12:11:31.948053372 +0000 UTC m=+329.170995504" Jan 29 12:11:50 crc kubenswrapper[4660]: I0129 
12:11:50.349251 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-fb894" Jan 29 12:11:50 crc kubenswrapper[4660]: I0129 12:11:50.410486 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sn58d"] Jan 29 12:11:56 crc kubenswrapper[4660]: I0129 12:11:56.269439 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:11:56 crc kubenswrapper[4660]: I0129 12:11:56.269791 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.457817 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" podUID="ef47e61f-9c90-4ccc-af09-58fcdb99b371" containerName="registry" containerID="cri-o://06343fea36d88df72c7002fb05a7f4f6058959f589dc705390550f89ada0bf4d" gracePeriod=30 Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.553544 4660 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-sn58d container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.19:5000/healthz\": dial tcp 10.217.0.19:5000: connect: connection refused" start-of-body= Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.553628 4660 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" 
podUID="ef47e61f-9c90-4ccc-af09-58fcdb99b371" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.19:5000/healthz\": dial tcp 10.217.0.19:5000: connect: connection refused" Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.790705 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.972992 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw67j\" (UniqueName: \"kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-kube-api-access-rw67j\") pod \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.973102 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef47e61f-9c90-4ccc-af09-58fcdb99b371-trusted-ca\") pod \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.973141 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef47e61f-9c90-4ccc-af09-58fcdb99b371-ca-trust-extracted\") pod \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.973197 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-bound-sa-token\") pod \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.973963 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ef47e61f-9c90-4ccc-af09-58fcdb99b371-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ef47e61f-9c90-4ccc-af09-58fcdb99b371" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.974151 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ef47e61f-9c90-4ccc-af09-58fcdb99b371-installation-pull-secrets\") pod \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.974214 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-registry-tls\") pod \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.974263 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef47e61f-9c90-4ccc-af09-58fcdb99b371-registry-certificates\") pod \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.974427 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\" (UID: \"ef47e61f-9c90-4ccc-af09-58fcdb99b371\") " Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.974671 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef47e61f-9c90-4ccc-af09-58fcdb99b371-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.975115 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef47e61f-9c90-4ccc-af09-58fcdb99b371-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ef47e61f-9c90-4ccc-af09-58fcdb99b371" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.979473 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ef47e61f-9c90-4ccc-af09-58fcdb99b371" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.979843 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ef47e61f-9c90-4ccc-af09-58fcdb99b371" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.982561 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-kube-api-access-rw67j" (OuterVolumeSpecName: "kube-api-access-rw67j") pod "ef47e61f-9c90-4ccc-af09-58fcdb99b371" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371"). InnerVolumeSpecName "kube-api-access-rw67j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.986739 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ef47e61f-9c90-4ccc-af09-58fcdb99b371" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.987474 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef47e61f-9c90-4ccc-af09-58fcdb99b371-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ef47e61f-9c90-4ccc-af09-58fcdb99b371" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:12:15 crc kubenswrapper[4660]: I0129 12:12:15.990174 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef47e61f-9c90-4ccc-af09-58fcdb99b371-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ef47e61f-9c90-4ccc-af09-58fcdb99b371" (UID: "ef47e61f-9c90-4ccc-af09-58fcdb99b371"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:12:16 crc kubenswrapper[4660]: I0129 12:12:16.075979 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw67j\" (UniqueName: \"kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-kube-api-access-rw67j\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:16 crc kubenswrapper[4660]: I0129 12:12:16.076012 4660 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef47e61f-9c90-4ccc-af09-58fcdb99b371-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:16 crc kubenswrapper[4660]: I0129 12:12:16.076021 4660 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:16 crc kubenswrapper[4660]: I0129 12:12:16.076030 4660 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ef47e61f-9c90-4ccc-af09-58fcdb99b371-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:16 crc kubenswrapper[4660]: I0129 12:12:16.076038 4660 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef47e61f-9c90-4ccc-af09-58fcdb99b371-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:16 crc kubenswrapper[4660]: I0129 12:12:16.076047 4660 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef47e61f-9c90-4ccc-af09-58fcdb99b371-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:16 crc kubenswrapper[4660]: I0129 12:12:16.145873 4660 generic.go:334] "Generic (PLEG): container finished" podID="ef47e61f-9c90-4ccc-af09-58fcdb99b371" containerID="06343fea36d88df72c7002fb05a7f4f6058959f589dc705390550f89ada0bf4d" exitCode=0 Jan 29 12:12:16 crc 
kubenswrapper[4660]: I0129 12:12:16.145904 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" event={"ID":"ef47e61f-9c90-4ccc-af09-58fcdb99b371","Type":"ContainerDied","Data":"06343fea36d88df72c7002fb05a7f4f6058959f589dc705390550f89ada0bf4d"} Jan 29 12:12:16 crc kubenswrapper[4660]: I0129 12:12:16.145946 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" Jan 29 12:12:16 crc kubenswrapper[4660]: I0129 12:12:16.145963 4660 scope.go:117] "RemoveContainer" containerID="06343fea36d88df72c7002fb05a7f4f6058959f589dc705390550f89ada0bf4d" Jan 29 12:12:16 crc kubenswrapper[4660]: I0129 12:12:16.145949 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sn58d" event={"ID":"ef47e61f-9c90-4ccc-af09-58fcdb99b371","Type":"ContainerDied","Data":"8741571506dc666b9d9e85e736689d206061bc889bd5b6b3e7a15b6e4c4eafb7"} Jan 29 12:12:16 crc kubenswrapper[4660]: I0129 12:12:16.159651 4660 scope.go:117] "RemoveContainer" containerID="06343fea36d88df72c7002fb05a7f4f6058959f589dc705390550f89ada0bf4d" Jan 29 12:12:16 crc kubenswrapper[4660]: E0129 12:12:16.160087 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06343fea36d88df72c7002fb05a7f4f6058959f589dc705390550f89ada0bf4d\": container with ID starting with 06343fea36d88df72c7002fb05a7f4f6058959f589dc705390550f89ada0bf4d not found: ID does not exist" containerID="06343fea36d88df72c7002fb05a7f4f6058959f589dc705390550f89ada0bf4d" Jan 29 12:12:16 crc kubenswrapper[4660]: I0129 12:12:16.160122 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06343fea36d88df72c7002fb05a7f4f6058959f589dc705390550f89ada0bf4d"} err="failed to get container status 
\"06343fea36d88df72c7002fb05a7f4f6058959f589dc705390550f89ada0bf4d\": rpc error: code = NotFound desc = could not find container \"06343fea36d88df72c7002fb05a7f4f6058959f589dc705390550f89ada0bf4d\": container with ID starting with 06343fea36d88df72c7002fb05a7f4f6058959f589dc705390550f89ada0bf4d not found: ID does not exist" Jan 29 12:12:16 crc kubenswrapper[4660]: I0129 12:12:16.191739 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sn58d"] Jan 29 12:12:16 crc kubenswrapper[4660]: I0129 12:12:16.195395 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sn58d"] Jan 29 12:12:17 crc kubenswrapper[4660]: I0129 12:12:17.479212 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef47e61f-9c90-4ccc-af09-58fcdb99b371" path="/var/lib/kubelet/pods/ef47e61f-9c90-4ccc-af09-58fcdb99b371/volumes" Jan 29 12:12:26 crc kubenswrapper[4660]: I0129 12:12:26.269090 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:12:26 crc kubenswrapper[4660]: I0129 12:12:26.269544 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:12:56 crc kubenswrapper[4660]: I0129 12:12:56.269952 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 29 12:12:56 crc kubenswrapper[4660]: I0129 12:12:56.271695 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:12:56 crc kubenswrapper[4660]: I0129 12:12:56.271896 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:12:56 crc kubenswrapper[4660]: I0129 12:12:56.272641 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eff0713f6bf872ade78cf9d0d16ea58cba32680112838028abfc718ba0e896cc"} pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:12:56 crc kubenswrapper[4660]: I0129 12:12:56.272823 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" containerID="cri-o://eff0713f6bf872ade78cf9d0d16ea58cba32680112838028abfc718ba0e896cc" gracePeriod=600 Jan 29 12:12:57 crc kubenswrapper[4660]: I0129 12:12:57.381537 4660 generic.go:334] "Generic (PLEG): container finished" podID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerID="eff0713f6bf872ade78cf9d0d16ea58cba32680112838028abfc718ba0e896cc" exitCode=0 Jan 29 12:12:57 crc kubenswrapper[4660]: I0129 12:12:57.381609 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerDied","Data":"eff0713f6bf872ade78cf9d0d16ea58cba32680112838028abfc718ba0e896cc"} Jan 29 12:12:57 crc kubenswrapper[4660]: I0129 12:12:57.382167 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerStarted","Data":"a0685f6fda28c06cc5da879db051ed52b865f71c1639d6f19634190e4d0d6734"} Jan 29 12:12:57 crc kubenswrapper[4660]: I0129 12:12:57.382189 4660 scope.go:117] "RemoveContainer" containerID="587a088c60cf6f8f2655e98b1d53cb42f8da84ed0598fdc5447a1784739afc0d" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.179318 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh"] Jan 29 12:15:00 crc kubenswrapper[4660]: E0129 12:15:00.180083 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef47e61f-9c90-4ccc-af09-58fcdb99b371" containerName="registry" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.180095 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef47e61f-9c90-4ccc-af09-58fcdb99b371" containerName="registry" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.180221 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef47e61f-9c90-4ccc-af09-58fcdb99b371" containerName="registry" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.180601 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.182585 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.183007 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.192738 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh"] Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.291048 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7mpx\" (UniqueName: \"kubernetes.io/projected/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-kube-api-access-p7mpx\") pod \"collect-profiles-29494815-2fpsh\" (UID: \"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.291108 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-config-volume\") pod \"collect-profiles-29494815-2fpsh\" (UID: \"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.291156 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-secret-volume\") pod \"collect-profiles-29494815-2fpsh\" (UID: \"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.392296 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7mpx\" (UniqueName: \"kubernetes.io/projected/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-kube-api-access-p7mpx\") pod \"collect-profiles-29494815-2fpsh\" (UID: \"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.392406 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-config-volume\") pod \"collect-profiles-29494815-2fpsh\" (UID: \"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.392493 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-secret-volume\") pod \"collect-profiles-29494815-2fpsh\" (UID: \"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.393829 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-config-volume\") pod \"collect-profiles-29494815-2fpsh\" (UID: \"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.401829 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-secret-volume\") pod \"collect-profiles-29494815-2fpsh\" (UID: \"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.420564 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7mpx\" (UniqueName: \"kubernetes.io/projected/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-kube-api-access-p7mpx\") pod \"collect-profiles-29494815-2fpsh\" (UID: \"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.498046 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" Jan 29 12:15:00 crc kubenswrapper[4660]: I0129 12:15:00.686578 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh"] Jan 29 12:15:01 crc kubenswrapper[4660]: I0129 12:15:01.223861 4660 generic.go:334] "Generic (PLEG): container finished" podID="189d7d7d-910a-41ae-bee0-0fa4ac0e90d4" containerID="b188f9c7a9e2b7415c620c45006101ac0b112977b52b2569f1f291412d65e3f4" exitCode=0 Jan 29 12:15:01 crc kubenswrapper[4660]: I0129 12:15:01.223922 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" event={"ID":"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4","Type":"ContainerDied","Data":"b188f9c7a9e2b7415c620c45006101ac0b112977b52b2569f1f291412d65e3f4"} Jan 29 12:15:01 crc kubenswrapper[4660]: I0129 12:15:01.224370 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" 
event={"ID":"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4","Type":"ContainerStarted","Data":"a847d98bda037660a53d42f80ec80ca73edbc0a0541c8f5ba1eccc1edef1e517"} Jan 29 12:15:02 crc kubenswrapper[4660]: I0129 12:15:02.436144 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" Jan 29 12:15:02 crc kubenswrapper[4660]: I0129 12:15:02.620206 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7mpx\" (UniqueName: \"kubernetes.io/projected/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-kube-api-access-p7mpx\") pod \"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4\" (UID: \"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4\") " Jan 29 12:15:02 crc kubenswrapper[4660]: I0129 12:15:02.620335 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-config-volume\") pod \"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4\" (UID: \"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4\") " Jan 29 12:15:02 crc kubenswrapper[4660]: I0129 12:15:02.620416 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-secret-volume\") pod \"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4\" (UID: \"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4\") " Jan 29 12:15:02 crc kubenswrapper[4660]: I0129 12:15:02.621718 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-config-volume" (OuterVolumeSpecName: "config-volume") pod "189d7d7d-910a-41ae-bee0-0fa4ac0e90d4" (UID: "189d7d7d-910a-41ae-bee0-0fa4ac0e90d4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:15:02 crc kubenswrapper[4660]: I0129 12:15:02.624858 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "189d7d7d-910a-41ae-bee0-0fa4ac0e90d4" (UID: "189d7d7d-910a-41ae-bee0-0fa4ac0e90d4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:15:02 crc kubenswrapper[4660]: I0129 12:15:02.624868 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-kube-api-access-p7mpx" (OuterVolumeSpecName: "kube-api-access-p7mpx") pod "189d7d7d-910a-41ae-bee0-0fa4ac0e90d4" (UID: "189d7d7d-910a-41ae-bee0-0fa4ac0e90d4"). InnerVolumeSpecName "kube-api-access-p7mpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:15:02 crc kubenswrapper[4660]: I0129 12:15:02.721760 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7mpx\" (UniqueName: \"kubernetes.io/projected/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-kube-api-access-p7mpx\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:02 crc kubenswrapper[4660]: I0129 12:15:02.721798 4660 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:02 crc kubenswrapper[4660]: I0129 12:15:02.721808 4660 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:03 crc kubenswrapper[4660]: I0129 12:15:03.237720 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" 
event={"ID":"189d7d7d-910a-41ae-bee0-0fa4ac0e90d4","Type":"ContainerDied","Data":"a847d98bda037660a53d42f80ec80ca73edbc0a0541c8f5ba1eccc1edef1e517"} Jan 29 12:15:03 crc kubenswrapper[4660]: I0129 12:15:03.237750 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh" Jan 29 12:15:03 crc kubenswrapper[4660]: I0129 12:15:03.237757 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a847d98bda037660a53d42f80ec80ca73edbc0a0541c8f5ba1eccc1edef1e517" Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.811744 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-c9r5g"] Jan 29 12:15:18 crc kubenswrapper[4660]: E0129 12:15:18.812596 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189d7d7d-910a-41ae-bee0-0fa4ac0e90d4" containerName="collect-profiles" Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.812614 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="189d7d7d-910a-41ae-bee0-0fa4ac0e90d4" containerName="collect-profiles" Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.812745 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="189d7d7d-910a-41ae-bee0-0fa4ac0e90d4" containerName="collect-profiles" Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.813182 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-c9r5g" Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.818118 4660 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hhj85" Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.819601 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-ljhct"] Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.820444 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ljhct" Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.822464 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.828829 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.829124 4660 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-cv2zh" Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.833520 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-c9r5g"] Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.842887 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4nd9k"] Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.843604 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4nd9k" Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.848891 4660 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-8jfjn" Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.857104 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ljhct"] Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.871917 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4nd9k"] Jan 29 12:15:18 crc kubenswrapper[4660]: I0129 12:15:18.915007 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg2kv\" (UniqueName: \"kubernetes.io/projected/fb17e443-58ad-4928-9781-b9e041b9b5d9-kube-api-access-sg2kv\") pod \"cert-manager-cainjector-cf98fcc89-c9r5g\" (UID: \"fb17e443-58ad-4928-9781-b9e041b9b5d9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-c9r5g" Jan 29 12:15:19 crc kubenswrapper[4660]: I0129 12:15:19.016649 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g86pm\" (UniqueName: \"kubernetes.io/projected/54caec82-1193-4ecb-a591-48fbe5587225-kube-api-access-g86pm\") pod \"cert-manager-858654f9db-ljhct\" (UID: \"54caec82-1193-4ecb-a591-48fbe5587225\") " pod="cert-manager/cert-manager-858654f9db-ljhct" Jan 29 12:15:19 crc kubenswrapper[4660]: I0129 12:15:19.016713 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpfmn\" (UniqueName: \"kubernetes.io/projected/52340342-62d7-46e4-af31-d17f8a4bed1e-kube-api-access-rpfmn\") pod \"cert-manager-webhook-687f57d79b-4nd9k\" (UID: \"52340342-62d7-46e4-af31-d17f8a4bed1e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4nd9k" Jan 29 12:15:19 crc kubenswrapper[4660]: I0129 12:15:19.016759 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg2kv\" (UniqueName: \"kubernetes.io/projected/fb17e443-58ad-4928-9781-b9e041b9b5d9-kube-api-access-sg2kv\") pod \"cert-manager-cainjector-cf98fcc89-c9r5g\" (UID: \"fb17e443-58ad-4928-9781-b9e041b9b5d9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-c9r5g" Jan 29 12:15:19 crc kubenswrapper[4660]: I0129 12:15:19.040021 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg2kv\" (UniqueName: \"kubernetes.io/projected/fb17e443-58ad-4928-9781-b9e041b9b5d9-kube-api-access-sg2kv\") pod \"cert-manager-cainjector-cf98fcc89-c9r5g\" (UID: \"fb17e443-58ad-4928-9781-b9e041b9b5d9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-c9r5g" Jan 29 12:15:19 crc kubenswrapper[4660]: I0129 12:15:19.117415 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g86pm\" (UniqueName: \"kubernetes.io/projected/54caec82-1193-4ecb-a591-48fbe5587225-kube-api-access-g86pm\") pod \"cert-manager-858654f9db-ljhct\" (UID: \"54caec82-1193-4ecb-a591-48fbe5587225\") " pod="cert-manager/cert-manager-858654f9db-ljhct" Jan 29 12:15:19 crc kubenswrapper[4660]: I0129 12:15:19.117493 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpfmn\" (UniqueName: \"kubernetes.io/projected/52340342-62d7-46e4-af31-d17f8a4bed1e-kube-api-access-rpfmn\") pod \"cert-manager-webhook-687f57d79b-4nd9k\" (UID: \"52340342-62d7-46e4-af31-d17f8a4bed1e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4nd9k" Jan 29 12:15:19 crc kubenswrapper[4660]: I0129 12:15:19.129291 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-c9r5g" Jan 29 12:15:19 crc kubenswrapper[4660]: I0129 12:15:19.141995 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g86pm\" (UniqueName: \"kubernetes.io/projected/54caec82-1193-4ecb-a591-48fbe5587225-kube-api-access-g86pm\") pod \"cert-manager-858654f9db-ljhct\" (UID: \"54caec82-1193-4ecb-a591-48fbe5587225\") " pod="cert-manager/cert-manager-858654f9db-ljhct" Jan 29 12:15:19 crc kubenswrapper[4660]: I0129 12:15:19.144614 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpfmn\" (UniqueName: \"kubernetes.io/projected/52340342-62d7-46e4-af31-d17f8a4bed1e-kube-api-access-rpfmn\") pod \"cert-manager-webhook-687f57d79b-4nd9k\" (UID: \"52340342-62d7-46e4-af31-d17f8a4bed1e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4nd9k" Jan 29 12:15:19 crc kubenswrapper[4660]: I0129 12:15:19.166257 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4nd9k" Jan 29 12:15:19 crc kubenswrapper[4660]: I0129 12:15:19.424348 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-c9r5g"] Jan 29 12:15:19 crc kubenswrapper[4660]: W0129 12:15:19.438342 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb17e443_58ad_4928_9781_b9e041b9b5d9.slice/crio-a78029b345c3010a966031f264fb225bf2cdf255ac13a5bb1a97dbee83f30f3d WatchSource:0}: Error finding container a78029b345c3010a966031f264fb225bf2cdf255ac13a5bb1a97dbee83f30f3d: Status 404 returned error can't find the container with id a78029b345c3010a966031f264fb225bf2cdf255ac13a5bb1a97dbee83f30f3d Jan 29 12:15:19 crc kubenswrapper[4660]: I0129 12:15:19.439537 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ljhct" Jan 29 12:15:19 crc kubenswrapper[4660]: I0129 12:15:19.441474 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:15:19 crc kubenswrapper[4660]: W0129 12:15:19.481910 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52340342_62d7_46e4_af31_d17f8a4bed1e.slice/crio-987a94aec585c2e11d9b5edff011d7d4c6dc9a725449712dd3bf22a1e7f2377b WatchSource:0}: Error finding container 987a94aec585c2e11d9b5edff011d7d4c6dc9a725449712dd3bf22a1e7f2377b: Status 404 returned error can't find the container with id 987a94aec585c2e11d9b5edff011d7d4c6dc9a725449712dd3bf22a1e7f2377b Jan 29 12:15:19 crc kubenswrapper[4660]: I0129 12:15:19.486021 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4nd9k"] Jan 29 12:15:19 crc kubenswrapper[4660]: I0129 12:15:19.633346 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ljhct"] Jan 29 12:15:19 crc kubenswrapper[4660]: W0129 12:15:19.639366 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54caec82_1193_4ecb_a591_48fbe5587225.slice/crio-83d332df9aa462321cb63c2ba0add119f77ba94e20b2d64ea2ec83a3df608a2d WatchSource:0}: Error finding container 83d332df9aa462321cb63c2ba0add119f77ba94e20b2d64ea2ec83a3df608a2d: Status 404 returned error can't find the container with id 83d332df9aa462321cb63c2ba0add119f77ba94e20b2d64ea2ec83a3df608a2d Jan 29 12:15:20 crc kubenswrapper[4660]: I0129 12:15:20.331095 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-c9r5g" event={"ID":"fb17e443-58ad-4928-9781-b9e041b9b5d9","Type":"ContainerStarted","Data":"a78029b345c3010a966031f264fb225bf2cdf255ac13a5bb1a97dbee83f30f3d"} Jan 29 
12:15:20 crc kubenswrapper[4660]: I0129 12:15:20.335966 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4nd9k" event={"ID":"52340342-62d7-46e4-af31-d17f8a4bed1e","Type":"ContainerStarted","Data":"987a94aec585c2e11d9b5edff011d7d4c6dc9a725449712dd3bf22a1e7f2377b"} Jan 29 12:15:20 crc kubenswrapper[4660]: I0129 12:15:20.337216 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ljhct" event={"ID":"54caec82-1193-4ecb-a591-48fbe5587225","Type":"ContainerStarted","Data":"83d332df9aa462321cb63c2ba0add119f77ba94e20b2d64ea2ec83a3df608a2d"} Jan 29 12:15:25 crc kubenswrapper[4660]: I0129 12:15:25.379198 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4nd9k" event={"ID":"52340342-62d7-46e4-af31-d17f8a4bed1e","Type":"ContainerStarted","Data":"7b69658a883f619b8af4ae716c8909e2f3cb44b263f1ffa0cdf2865825cddd83"} Jan 29 12:15:25 crc kubenswrapper[4660]: I0129 12:15:25.379827 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-4nd9k" Jan 29 12:15:25 crc kubenswrapper[4660]: I0129 12:15:25.399271 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-4nd9k" podStartSLOduration=2.628424919 podStartE2EDuration="7.399252015s" podCreationTimestamp="2026-01-29 12:15:18 +0000 UTC" firstStartedPulling="2026-01-29 12:15:19.484335558 +0000 UTC m=+556.707277690" lastFinishedPulling="2026-01-29 12:15:24.255162654 +0000 UTC m=+561.478104786" observedRunningTime="2026-01-29 12:15:25.397805143 +0000 UTC m=+562.620747275" watchObservedRunningTime="2026-01-29 12:15:25.399252015 +0000 UTC m=+562.622194147" Jan 29 12:15:26 crc kubenswrapper[4660]: I0129 12:15:26.269135 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:15:26 crc kubenswrapper[4660]: I0129 12:15:26.269492 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:15:26 crc kubenswrapper[4660]: I0129 12:15:26.385322 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-c9r5g" event={"ID":"fb17e443-58ad-4928-9781-b9e041b9b5d9","Type":"ContainerStarted","Data":"f279ea9557636c87e64be8f4a1d7b679ec56e3caf935410ae6989f7d476f4f07"} Jan 29 12:15:26 crc kubenswrapper[4660]: I0129 12:15:26.388717 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ljhct" event={"ID":"54caec82-1193-4ecb-a591-48fbe5587225","Type":"ContainerStarted","Data":"3a750ed09a2f176aae45d1bcc33c6acb66a19cf729b724017176da1861f3fc6f"} Jan 29 12:15:26 crc kubenswrapper[4660]: I0129 12:15:26.417262 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-c9r5g" podStartSLOduration=2.023251896 podStartE2EDuration="8.417239181s" podCreationTimestamp="2026-01-29 12:15:18 +0000 UTC" firstStartedPulling="2026-01-29 12:15:19.440957434 +0000 UTC m=+556.663899566" lastFinishedPulling="2026-01-29 12:15:25.834944709 +0000 UTC m=+563.057886851" observedRunningTime="2026-01-29 12:15:26.404126262 +0000 UTC m=+563.627068394" watchObservedRunningTime="2026-01-29 12:15:26.417239181 +0000 UTC m=+563.640181323" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.232316 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-858654f9db-ljhct" podStartSLOduration=4.038214212 podStartE2EDuration="10.232292618s" podCreationTimestamp="2026-01-29 12:15:18 +0000 UTC" firstStartedPulling="2026-01-29 12:15:19.647822414 +0000 UTC m=+556.870764546" lastFinishedPulling="2026-01-29 12:15:25.8419008 +0000 UTC m=+563.064842952" observedRunningTime="2026-01-29 12:15:26.418890758 +0000 UTC m=+563.641832890" watchObservedRunningTime="2026-01-29 12:15:28.232292618 +0000 UTC m=+565.455234750" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.235135 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-clbcs"] Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.235581 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovn-controller" containerID="cri-o://c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f" gracePeriod=30 Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.235665 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="nbdb" containerID="cri-o://dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a" gracePeriod=30 Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.235759 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="northd" containerID="cri-o://52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4" gracePeriod=30 Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.235848 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" 
containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478" gracePeriod=30 Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.235900 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="kube-rbac-proxy-node" containerID="cri-o://61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887" gracePeriod=30 Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.235944 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovn-acl-logging" containerID="cri-o://fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399" gracePeriod=30 Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.236194 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="sbdb" containerID="cri-o://a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1" gracePeriod=30 Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.274581 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" containerID="cri-o://1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac" gracePeriod=30 Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.402430 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovnkube-controller/3.log" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.404136 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovn-acl-logging/0.log" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.404795 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovn-controller/0.log" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.405102 4660 generic.go:334] "Generic (PLEG): container finished" podID="39de46a2-9cba-4331-aab2-697f0337563c" containerID="1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac" exitCode=0 Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.405122 4660 generic.go:334] "Generic (PLEG): container finished" podID="39de46a2-9cba-4331-aab2-697f0337563c" containerID="dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a" exitCode=0 Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.405129 4660 generic.go:334] "Generic (PLEG): container finished" podID="39de46a2-9cba-4331-aab2-697f0337563c" containerID="fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478" exitCode=0 Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.405136 4660 generic.go:334] "Generic (PLEG): container finished" podID="39de46a2-9cba-4331-aab2-697f0337563c" containerID="61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887" exitCode=0 Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.405142 4660 generic.go:334] "Generic (PLEG): container finished" podID="39de46a2-9cba-4331-aab2-697f0337563c" containerID="fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399" exitCode=143 Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.405148 4660 generic.go:334] "Generic (PLEG): container finished" podID="39de46a2-9cba-4331-aab2-697f0337563c" containerID="c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f" exitCode=143 Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.405185 4660 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerDied","Data":"1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac"} Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.405210 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerDied","Data":"dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a"} Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.405221 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerDied","Data":"fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478"} Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.405230 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerDied","Data":"61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887"} Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.405239 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerDied","Data":"fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399"} Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.405248 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerDied","Data":"c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f"} Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.405263 4660 scope.go:117] "RemoveContainer" containerID="b0a9d4fe8b3f603a0e9250b85bd33354143459d5f7db3b889771ddde6b600637" Jan 29 12:15:28 crc 
kubenswrapper[4660]: I0129 12:15:28.407979 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vb4nc_f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3/kube-multus/2.log" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.409746 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vb4nc_f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3/kube-multus/1.log" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.409780 4660 generic.go:334] "Generic (PLEG): container finished" podID="f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3" containerID="796a63e053200b4ace3f5e3866d8bf84e7f5ce390f9cfa7c5aceb54155e1c137" exitCode=2 Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.409801 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vb4nc" event={"ID":"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3","Type":"ContainerDied","Data":"796a63e053200b4ace3f5e3866d8bf84e7f5ce390f9cfa7c5aceb54155e1c137"} Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.410285 4660 scope.go:117] "RemoveContainer" containerID="796a63e053200b4ace3f5e3866d8bf84e7f5ce390f9cfa7c5aceb54155e1c137" Jan 29 12:15:28 crc kubenswrapper[4660]: E0129 12:15:28.410443 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-vb4nc_openshift-multus(f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3)\"" pod="openshift-multus/multus-vb4nc" podUID="f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.445655 4660 scope.go:117] "RemoveContainer" containerID="799a1c742d7cd17cf905dd5492628f201d1c5861e4b848ab1b62961075aa828f" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.569089 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovn-acl-logging/0.log" Jan 29 12:15:28 crc 
kubenswrapper[4660]: I0129 12:15:28.569567 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovn-controller/0.log" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.570162 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.625560 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kf4wg"] Jan 29 12:15:28 crc kubenswrapper[4660]: E0129 12:15:28.625813 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="kubecfg-setup" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.625828 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="kubecfg-setup" Jan 29 12:15:28 crc kubenswrapper[4660]: E0129 12:15:28.625836 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="kube-rbac-proxy-node" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.625841 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="kube-rbac-proxy-node" Jan 29 12:15:28 crc kubenswrapper[4660]: E0129 12:15:28.625855 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.625866 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" Jan 29 12:15:28 crc kubenswrapper[4660]: E0129 12:15:28.625876 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" Jan 29 12:15:28 
crc kubenswrapper[4660]: I0129 12:15:28.625885 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" Jan 29 12:15:28 crc kubenswrapper[4660]: E0129 12:15:28.625897 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.625907 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 12:15:28 crc kubenswrapper[4660]: E0129 12:15:28.625917 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="nbdb" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.625924 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="nbdb" Jan 29 12:15:28 crc kubenswrapper[4660]: E0129 12:15:28.625935 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.625942 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" Jan 29 12:15:28 crc kubenswrapper[4660]: E0129 12:15:28.625949 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.625957 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" Jan 29 12:15:28 crc kubenswrapper[4660]: E0129 12:15:28.625966 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovn-controller" Jan 29 
12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.625972 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovn-controller" Jan 29 12:15:28 crc kubenswrapper[4660]: E0129 12:15:28.625983 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="sbdb" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.625990 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="sbdb" Jan 29 12:15:28 crc kubenswrapper[4660]: E0129 12:15:28.626001 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovn-acl-logging" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.626008 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovn-acl-logging" Jan 29 12:15:28 crc kubenswrapper[4660]: E0129 12:15:28.626019 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="northd" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.626025 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="northd" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.626142 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.626155 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.626165 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 12:15:28 crc 
kubenswrapper[4660]: I0129 12:15:28.626173 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="nbdb" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.626180 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovn-acl-logging" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.626187 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovn-controller" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.626195 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="sbdb" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.626203 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.626209 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="northd" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.626217 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="kube-rbac-proxy-node" Jan 29 12:15:28 crc kubenswrapper[4660]: E0129 12:15:28.626301 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.626308 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.626390 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" Jan 29 
12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.626426 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="39de46a2-9cba-4331-aab2-697f0337563c" containerName="ovnkube-controller" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.628079 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.747903 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-kubelet\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.747964 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-cni-netd\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.747997 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-ovnkube-config\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748022 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-cni-bin\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748041 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-etc-openvswitch\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748075 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-slash\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748101 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-run-ovn-kubernetes\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748125 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-run-netns\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748282 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-systemd\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748316 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-var-lib-openvswitch\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc 
kubenswrapper[4660]: I0129 12:15:28.748398 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-ovnkube-script-lib\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748423 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-ovn\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748454 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-openvswitch\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748482 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-node-log\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748512 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-env-overrides\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748537 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-systemd-units\") pod 
\"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748564 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748602 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39de46a2-9cba-4331-aab2-697f0337563c-ovn-node-metrics-cert\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748626 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-log-socket\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748648 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-267kg\" (UniqueName: \"kubernetes.io/projected/39de46a2-9cba-4331-aab2-697f0337563c-kube-api-access-267kg\") pod \"39de46a2-9cba-4331-aab2-697f0337563c\" (UID: \"39de46a2-9cba-4331-aab2-697f0337563c\") " Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748818 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d2144c5e-6b86-40f6-b018-2f4e5e318091-ovnkube-script-lib\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 
12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748854 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d2144c5e-6b86-40f6-b018-2f4e5e318091-ovn-node-metrics-cert\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748911 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-slash\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748944 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-node-log\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.748972 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-cni-netd\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749012 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-systemd-units\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 
29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749042 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-var-lib-openvswitch\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749066 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-run-systemd\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749092 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-run-ovn-kubernetes\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749126 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-etc-openvswitch\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749170 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9s5n\" (UniqueName: \"kubernetes.io/projected/d2144c5e-6b86-40f6-b018-2f4e5e318091-kube-api-access-j9s5n\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749202 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d2144c5e-6b86-40f6-b018-2f4e5e318091-ovnkube-config\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749243 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749285 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-run-openvswitch\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749302 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749324 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-kubelet\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749402 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d2144c5e-6b86-40f6-b018-2f4e5e318091-env-overrides\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749420 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749449 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749464 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-run-ovn\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749492 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-node-log" (OuterVolumeSpecName: "node-log") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749537 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-log-socket\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749568 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-run-netns\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749586 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-cni-bin\") pod \"ovnkube-node-kf4wg\" (UID: 
\"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749656 4660 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749667 4660 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749678 4660 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749707 4660 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749745 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749768 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749888 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749962 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.749971 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.750000 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.750010 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.750016 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.750035 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.750060 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.750124 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-log-socket" (OuterVolumeSpecName: "log-socket") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.750157 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.750825 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-slash" (OuterVolumeSpecName: "host-slash") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.755985 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39de46a2-9cba-4331-aab2-697f0337563c-kube-api-access-267kg" (OuterVolumeSpecName: "kube-api-access-267kg") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "kube-api-access-267kg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.756584 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39de46a2-9cba-4331-aab2-697f0337563c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.768676 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "39de46a2-9cba-4331-aab2-697f0337563c" (UID: "39de46a2-9cba-4331-aab2-697f0337563c"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.850457 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.850933 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-run-openvswitch\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.850974 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-kubelet\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.850602 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851003 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d2144c5e-6b86-40f6-b018-2f4e5e318091-env-overrides\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851087 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-run-ovn\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851146 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-log-socket\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851153 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-run-openvswitch\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851222 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-log-socket\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851172 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-kubelet\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851224 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-run-netns\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851228 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-run-ovn\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851190 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-run-netns\") pod \"ovnkube-node-kf4wg\" (UID: 
\"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851333 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-cni-bin\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851382 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-cni-bin\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851397 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d2144c5e-6b86-40f6-b018-2f4e5e318091-ovnkube-script-lib\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851450 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d2144c5e-6b86-40f6-b018-2f4e5e318091-ovn-node-metrics-cert\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851491 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-slash\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851508 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-node-log\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851527 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-cni-netd\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851556 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-systemd-units\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851573 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-var-lib-openvswitch\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851593 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-run-systemd\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: 
I0129 12:15:28.851609 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-run-ovn-kubernetes\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851633 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-etc-openvswitch\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851714 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9s5n\" (UniqueName: \"kubernetes.io/projected/d2144c5e-6b86-40f6-b018-2f4e5e318091-kube-api-access-j9s5n\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851733 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d2144c5e-6b86-40f6-b018-2f4e5e318091-ovnkube-config\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851801 4660 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851812 4660 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851822 4660 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851830 4660 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851839 4660 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851849 4660 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851857 4660 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851867 4660 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851871 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/d2144c5e-6b86-40f6-b018-2f4e5e318091-env-overrides\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851877 4660 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39de46a2-9cba-4331-aab2-697f0337563c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851935 4660 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851950 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-267kg\" (UniqueName: \"kubernetes.io/projected/39de46a2-9cba-4331-aab2-697f0337563c-kube-api-access-267kg\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851965 4660 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851977 4660 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.851989 4660 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39de46a2-9cba-4331-aab2-697f0337563c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.852001 4660 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.852013 4660 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/39de46a2-9cba-4331-aab2-697f0337563c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.852050 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-systemd-units\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.852148 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d2144c5e-6b86-40f6-b018-2f4e5e318091-ovnkube-script-lib\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.852192 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-run-ovn-kubernetes\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.852218 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-var-lib-openvswitch\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.852239 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-run-systemd\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.852260 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-etc-openvswitch\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.852281 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-node-log\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.852306 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-slash\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.852314 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d2144c5e-6b86-40f6-b018-2f4e5e318091-ovnkube-config\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.852357 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/d2144c5e-6b86-40f6-b018-2f4e5e318091-host-cni-netd\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.858653 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d2144c5e-6b86-40f6-b018-2f4e5e318091-ovn-node-metrics-cert\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.871222 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9s5n\" (UniqueName: \"kubernetes.io/projected/d2144c5e-6b86-40f6-b018-2f4e5e318091-kube-api-access-j9s5n\") pod \"ovnkube-node-kf4wg\" (UID: \"d2144c5e-6b86-40f6-b018-2f4e5e318091\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: I0129 12:15:28.942032 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:28 crc kubenswrapper[4660]: W0129 12:15:28.962008 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2144c5e_6b86_40f6_b018_2f4e5e318091.slice/crio-006ad5f6378f8667bfd88a1ff7b6bcbafa4676373f9e7f23572db71ad292504f WatchSource:0}: Error finding container 006ad5f6378f8667bfd88a1ff7b6bcbafa4676373f9e7f23572db71ad292504f: Status 404 returned error can't find the container with id 006ad5f6378f8667bfd88a1ff7b6bcbafa4676373f9e7f23572db71ad292504f Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.169572 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-4nd9k" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.416609 4660 generic.go:334] "Generic (PLEG): container finished" podID="d2144c5e-6b86-40f6-b018-2f4e5e318091" containerID="f3bed230da9f60f016c7077d78e37e28169a4b68b9777f8c3846dd90ef598f3a" exitCode=0 Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.416677 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" event={"ID":"d2144c5e-6b86-40f6-b018-2f4e5e318091","Type":"ContainerDied","Data":"f3bed230da9f60f016c7077d78e37e28169a4b68b9777f8c3846dd90ef598f3a"} Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.417741 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" event={"ID":"d2144c5e-6b86-40f6-b018-2f4e5e318091","Type":"ContainerStarted","Data":"006ad5f6378f8667bfd88a1ff7b6bcbafa4676373f9e7f23572db71ad292504f"} Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.420912 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vb4nc_f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3/kube-multus/2.log" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.426446 4660 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovn-acl-logging/0.log" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.426998 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-clbcs_39de46a2-9cba-4331-aab2-697f0337563c/ovn-controller/0.log" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.427612 4660 generic.go:334] "Generic (PLEG): container finished" podID="39de46a2-9cba-4331-aab2-697f0337563c" containerID="a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1" exitCode=0 Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.427741 4660 generic.go:334] "Generic (PLEG): container finished" podID="39de46a2-9cba-4331-aab2-697f0337563c" containerID="52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4" exitCode=0 Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.427815 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.427843 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerDied","Data":"a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1"} Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.428454 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerDied","Data":"52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4"} Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.428474 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-clbcs" event={"ID":"39de46a2-9cba-4331-aab2-697f0337563c","Type":"ContainerDied","Data":"8afad13a6c9d8b36803471e600bcd42714994724f588753cc69cdfa66673e5dd"} Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.428503 4660 scope.go:117] "RemoveContainer" containerID="1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.452666 4660 scope.go:117] "RemoveContainer" containerID="a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.485996 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-clbcs"] Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.488160 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-clbcs"] Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.488724 4660 scope.go:117] "RemoveContainer" containerID="dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.507637 4660 scope.go:117] 
"RemoveContainer" containerID="52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.532903 4660 scope.go:117] "RemoveContainer" containerID="fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.548913 4660 scope.go:117] "RemoveContainer" containerID="61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.575392 4660 scope.go:117] "RemoveContainer" containerID="fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.596010 4660 scope.go:117] "RemoveContainer" containerID="c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.632377 4660 scope.go:117] "RemoveContainer" containerID="46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.666077 4660 scope.go:117] "RemoveContainer" containerID="1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac" Jan 29 12:15:29 crc kubenswrapper[4660]: E0129 12:15:29.668235 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac\": container with ID starting with 1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac not found: ID does not exist" containerID="1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.668286 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac"} err="failed to get container status \"1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac\": rpc error: code = NotFound 
desc = could not find container \"1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac\": container with ID starting with 1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.668308 4660 scope.go:117] "RemoveContainer" containerID="a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1" Jan 29 12:15:29 crc kubenswrapper[4660]: E0129 12:15:29.668907 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\": container with ID starting with a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1 not found: ID does not exist" containerID="a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.668953 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1"} err="failed to get container status \"a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\": rpc error: code = NotFound desc = could not find container \"a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\": container with ID starting with a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1 not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.668980 4660 scope.go:117] "RemoveContainer" containerID="dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a" Jan 29 12:15:29 crc kubenswrapper[4660]: E0129 12:15:29.669450 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\": container with ID starting with 
dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a not found: ID does not exist" containerID="dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.669473 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a"} err="failed to get container status \"dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\": rpc error: code = NotFound desc = could not find container \"dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\": container with ID starting with dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.669486 4660 scope.go:117] "RemoveContainer" containerID="52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4" Jan 29 12:15:29 crc kubenswrapper[4660]: E0129 12:15:29.669714 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\": container with ID starting with 52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4 not found: ID does not exist" containerID="52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.669733 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4"} err="failed to get container status \"52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\": rpc error: code = NotFound desc = could not find container \"52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\": container with ID starting with 52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4 not found: ID does not 
exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.669757 4660 scope.go:117] "RemoveContainer" containerID="fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478" Jan 29 12:15:29 crc kubenswrapper[4660]: E0129 12:15:29.670152 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\": container with ID starting with fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478 not found: ID does not exist" containerID="fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.670175 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478"} err="failed to get container status \"fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\": rpc error: code = NotFound desc = could not find container \"fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\": container with ID starting with fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478 not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.670193 4660 scope.go:117] "RemoveContainer" containerID="61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887" Jan 29 12:15:29 crc kubenswrapper[4660]: E0129 12:15:29.671838 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\": container with ID starting with 61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887 not found: ID does not exist" containerID="61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.671862 4660 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887"} err="failed to get container status \"61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\": rpc error: code = NotFound desc = could not find container \"61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\": container with ID starting with 61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887 not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.671877 4660 scope.go:117] "RemoveContainer" containerID="fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399" Jan 29 12:15:29 crc kubenswrapper[4660]: E0129 12:15:29.672109 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\": container with ID starting with fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399 not found: ID does not exist" containerID="fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.672147 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399"} err="failed to get container status \"fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\": rpc error: code = NotFound desc = could not find container \"fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\": container with ID starting with fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399 not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.672164 4660 scope.go:117] "RemoveContainer" containerID="c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f" Jan 29 12:15:29 crc kubenswrapper[4660]: E0129 12:15:29.672405 4660 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\": container with ID starting with c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f not found: ID does not exist" containerID="c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.672432 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f"} err="failed to get container status \"c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\": rpc error: code = NotFound desc = could not find container \"c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\": container with ID starting with c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.672457 4660 scope.go:117] "RemoveContainer" containerID="46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219" Jan 29 12:15:29 crc kubenswrapper[4660]: E0129 12:15:29.672648 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\": container with ID starting with 46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219 not found: ID does not exist" containerID="46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.672676 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219"} err="failed to get container status \"46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\": rpc error: code = NotFound desc = could 
not find container \"46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\": container with ID starting with 46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219 not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.672710 4660 scope.go:117] "RemoveContainer" containerID="1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.673042 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac"} err="failed to get container status \"1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac\": rpc error: code = NotFound desc = could not find container \"1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac\": container with ID starting with 1f04aecf0641dbb3ffce38e933e35d01c4fe65a7c34466eb6dd9b820a2e01dac not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.673070 4660 scope.go:117] "RemoveContainer" containerID="a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.673288 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1"} err="failed to get container status \"a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\": rpc error: code = NotFound desc = could not find container \"a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1\": container with ID starting with a38a9509d9a913227308d2763f47f13b5a0852e99e5687f5d46691c6882c12c1 not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.673314 4660 scope.go:117] "RemoveContainer" containerID="dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 
12:15:29.673795 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a"} err="failed to get container status \"dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\": rpc error: code = NotFound desc = could not find container \"dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a\": container with ID starting with dcc38f41a7010226e466658ee660b52d39783e45b44279cd40ab7e8ff42bf72a not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.673823 4660 scope.go:117] "RemoveContainer" containerID="52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.674031 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4"} err="failed to get container status \"52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\": rpc error: code = NotFound desc = could not find container \"52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4\": container with ID starting with 52170fad8768e075531e961c6cd754bfd6a295f61f247d4924891b113a5e00d4 not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.674057 4660 scope.go:117] "RemoveContainer" containerID="fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.674415 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478"} err="failed to get container status \"fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\": rpc error: code = NotFound desc = could not find container \"fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478\": container with ID starting with 
fd4b57b6191669038cb8c482dc763e6a4bd3f290dd6e8fd1ef88a180ba3cb478 not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.674435 4660 scope.go:117] "RemoveContainer" containerID="61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.674830 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887"} err="failed to get container status \"61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\": rpc error: code = NotFound desc = could not find container \"61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887\": container with ID starting with 61b77a7bbedc2ae877ba18ad8b0d1747343548afc5b11e31a1354ef5e45d9887 not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.674850 4660 scope.go:117] "RemoveContainer" containerID="fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.675114 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399"} err="failed to get container status \"fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\": rpc error: code = NotFound desc = could not find container \"fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399\": container with ID starting with fc087fac5a2b7201d30df072a78948362670072d596ef20806508257b85a6399 not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.675131 4660 scope.go:117] "RemoveContainer" containerID="c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.675312 4660 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f"} err="failed to get container status \"c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\": rpc error: code = NotFound desc = could not find container \"c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f\": container with ID starting with c753559a4abb4b38c3654dde85e9babbcc5799f904f93a8af13dd3aa7165029f not found: ID does not exist" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.675329 4660 scope.go:117] "RemoveContainer" containerID="46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219" Jan 29 12:15:29 crc kubenswrapper[4660]: I0129 12:15:29.675553 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219"} err="failed to get container status \"46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\": rpc error: code = NotFound desc = could not find container \"46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219\": container with ID starting with 46a53d43606a7d3ba58764269a22dc77ea625ce30c1ef422a097cbdf72fa4219 not found: ID does not exist" Jan 29 12:15:30 crc kubenswrapper[4660]: I0129 12:15:30.439029 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" event={"ID":"d2144c5e-6b86-40f6-b018-2f4e5e318091","Type":"ContainerStarted","Data":"209f32891dfec13c7857ee99fbcb589d7190ee3028e0ee1e8cb7f7e8ac1893c4"} Jan 29 12:15:30 crc kubenswrapper[4660]: I0129 12:15:30.439568 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" event={"ID":"d2144c5e-6b86-40f6-b018-2f4e5e318091","Type":"ContainerStarted","Data":"205348ddc5d9f74557452f3bae5de7dc90b771c2bf8c40fd5834997e03e44243"} Jan 29 12:15:30 crc kubenswrapper[4660]: I0129 12:15:30.439608 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" event={"ID":"d2144c5e-6b86-40f6-b018-2f4e5e318091","Type":"ContainerStarted","Data":"0a9604a43bb45a90d98f0f571dfad23e1dc69ce84434269ced1f200bf26340c1"} Jan 29 12:15:30 crc kubenswrapper[4660]: I0129 12:15:30.439633 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" event={"ID":"d2144c5e-6b86-40f6-b018-2f4e5e318091","Type":"ContainerStarted","Data":"028406b7357527d09cdcad027bd54b22c76b38fdc8fc0fa38da14f9490c76e33"} Jan 29 12:15:30 crc kubenswrapper[4660]: I0129 12:15:30.439657 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" event={"ID":"d2144c5e-6b86-40f6-b018-2f4e5e318091","Type":"ContainerStarted","Data":"b68d922a529ada503a18508875d9cc89fcc79692f07f03f99e3be3ce4996f81a"} Jan 29 12:15:30 crc kubenswrapper[4660]: I0129 12:15:30.439680 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" event={"ID":"d2144c5e-6b86-40f6-b018-2f4e5e318091","Type":"ContainerStarted","Data":"0d459ecba398932dcd48f02fba97154f21cd45ca1c28e98a460706990dca92c7"} Jan 29 12:15:31 crc kubenswrapper[4660]: I0129 12:15:31.475789 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39de46a2-9cba-4331-aab2-697f0337563c" path="/var/lib/kubelet/pods/39de46a2-9cba-4331-aab2-697f0337563c/volumes" Jan 29 12:15:32 crc kubenswrapper[4660]: I0129 12:15:32.452440 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" event={"ID":"d2144c5e-6b86-40f6-b018-2f4e5e318091","Type":"ContainerStarted","Data":"ee1fcffdefc335dc327cb6801832f426fb12fe3361aea406398ce9ceb2a3e586"} Jan 29 12:15:35 crc kubenswrapper[4660]: I0129 12:15:35.476260 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" 
event={"ID":"d2144c5e-6b86-40f6-b018-2f4e5e318091","Type":"ContainerStarted","Data":"271b7f66c0bf9a366f1fda9be83cb924cdfe2acc5edaa37e7c083afd3a99fa09"} Jan 29 12:15:35 crc kubenswrapper[4660]: I0129 12:15:35.476818 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:35 crc kubenswrapper[4660]: I0129 12:15:35.476891 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:35 crc kubenswrapper[4660]: I0129 12:15:35.476953 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:35 crc kubenswrapper[4660]: I0129 12:15:35.514521 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:35 crc kubenswrapper[4660]: I0129 12:15:35.535747 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" podStartSLOduration=7.53572298 podStartE2EDuration="7.53572298s" podCreationTimestamp="2026-01-29 12:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:15:35.528017457 +0000 UTC m=+572.750959599" watchObservedRunningTime="2026-01-29 12:15:35.53572298 +0000 UTC m=+572.758665122" Jan 29 12:15:35 crc kubenswrapper[4660]: I0129 12:15:35.540410 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:15:42 crc kubenswrapper[4660]: I0129 12:15:42.470006 4660 scope.go:117] "RemoveContainer" containerID="796a63e053200b4ace3f5e3866d8bf84e7f5ce390f9cfa7c5aceb54155e1c137" Jan 29 12:15:42 crc kubenswrapper[4660]: E0129 12:15:42.470862 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with 
CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-vb4nc_openshift-multus(f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3)\"" pod="openshift-multus/multus-vb4nc" podUID="f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3" Jan 29 12:15:56 crc kubenswrapper[4660]: I0129 12:15:56.269240 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:15:56 crc kubenswrapper[4660]: I0129 12:15:56.269804 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:15:56 crc kubenswrapper[4660]: I0129 12:15:56.470140 4660 scope.go:117] "RemoveContainer" containerID="796a63e053200b4ace3f5e3866d8bf84e7f5ce390f9cfa7c5aceb54155e1c137" Jan 29 12:15:57 crc kubenswrapper[4660]: I0129 12:15:57.594231 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vb4nc_f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3/kube-multus/2.log" Jan 29 12:15:57 crc kubenswrapper[4660]: I0129 12:15:57.594520 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vb4nc" event={"ID":"f3d2c3c2-a0ef-4204-b7c1-533e7ee29ee3","Type":"ContainerStarted","Data":"cee5f5cceae4c1b134c551849de71a846f2bb08633d4ada9165a4d9d518f138e"} Jan 29 12:15:58 crc kubenswrapper[4660]: I0129 12:15:58.967485 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kf4wg" Jan 29 12:16:09 crc kubenswrapper[4660]: I0129 12:16:09.429246 4660 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj"] Jan 29 12:16:09 crc kubenswrapper[4660]: I0129 12:16:09.430866 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" Jan 29 12:16:09 crc kubenswrapper[4660]: I0129 12:16:09.437282 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 12:16:09 crc kubenswrapper[4660]: I0129 12:16:09.478749 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg4wl\" (UniqueName: \"kubernetes.io/projected/e86dcb10-336d-41f0-a29e-bcf75712b335-kube-api-access-bg4wl\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj\" (UID: \"e86dcb10-336d-41f0-a29e-bcf75712b335\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" Jan 29 12:16:09 crc kubenswrapper[4660]: I0129 12:16:09.479107 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e86dcb10-336d-41f0-a29e-bcf75712b335-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj\" (UID: \"e86dcb10-336d-41f0-a29e-bcf75712b335\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" Jan 29 12:16:09 crc kubenswrapper[4660]: I0129 12:16:09.479141 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e86dcb10-336d-41f0-a29e-bcf75712b335-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj\" (UID: \"e86dcb10-336d-41f0-a29e-bcf75712b335\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" Jan 29 12:16:09 crc kubenswrapper[4660]: 
I0129 12:16:09.508872 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj"] Jan 29 12:16:09 crc kubenswrapper[4660]: I0129 12:16:09.579919 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg4wl\" (UniqueName: \"kubernetes.io/projected/e86dcb10-336d-41f0-a29e-bcf75712b335-kube-api-access-bg4wl\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj\" (UID: \"e86dcb10-336d-41f0-a29e-bcf75712b335\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" Jan 29 12:16:09 crc kubenswrapper[4660]: I0129 12:16:09.580199 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e86dcb10-336d-41f0-a29e-bcf75712b335-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj\" (UID: \"e86dcb10-336d-41f0-a29e-bcf75712b335\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" Jan 29 12:16:09 crc kubenswrapper[4660]: I0129 12:16:09.580291 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e86dcb10-336d-41f0-a29e-bcf75712b335-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj\" (UID: \"e86dcb10-336d-41f0-a29e-bcf75712b335\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" Jan 29 12:16:09 crc kubenswrapper[4660]: I0129 12:16:09.580839 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e86dcb10-336d-41f0-a29e-bcf75712b335-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj\" (UID: \"e86dcb10-336d-41f0-a29e-bcf75712b335\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" Jan 29 12:16:09 crc kubenswrapper[4660]: I0129 12:16:09.581165 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e86dcb10-336d-41f0-a29e-bcf75712b335-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj\" (UID: \"e86dcb10-336d-41f0-a29e-bcf75712b335\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" Jan 29 12:16:09 crc kubenswrapper[4660]: I0129 12:16:09.600558 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg4wl\" (UniqueName: \"kubernetes.io/projected/e86dcb10-336d-41f0-a29e-bcf75712b335-kube-api-access-bg4wl\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj\" (UID: \"e86dcb10-336d-41f0-a29e-bcf75712b335\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" Jan 29 12:16:09 crc kubenswrapper[4660]: I0129 12:16:09.752781 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" Jan 29 12:16:10 crc kubenswrapper[4660]: I0129 12:16:10.147002 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj"] Jan 29 12:16:10 crc kubenswrapper[4660]: W0129 12:16:10.155806 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode86dcb10_336d_41f0_a29e_bcf75712b335.slice/crio-b1011a2bf13035195228387f826fee24391702734c17229bc4d883dc7677dc1a WatchSource:0}: Error finding container b1011a2bf13035195228387f826fee24391702734c17229bc4d883dc7677dc1a: Status 404 returned error can't find the container with id b1011a2bf13035195228387f826fee24391702734c17229bc4d883dc7677dc1a Jan 29 12:16:10 crc kubenswrapper[4660]: I0129 12:16:10.681356 4660 generic.go:334] "Generic (PLEG): container finished" podID="e86dcb10-336d-41f0-a29e-bcf75712b335" containerID="cf0afb5bf5e980d4d0c883c154edabdc031d0ea5fd6e4492e52e710ee22b1e74" exitCode=0 Jan 29 12:16:10 crc kubenswrapper[4660]: I0129 12:16:10.681520 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" event={"ID":"e86dcb10-336d-41f0-a29e-bcf75712b335","Type":"ContainerDied","Data":"cf0afb5bf5e980d4d0c883c154edabdc031d0ea5fd6e4492e52e710ee22b1e74"} Jan 29 12:16:10 crc kubenswrapper[4660]: I0129 12:16:10.681844 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" event={"ID":"e86dcb10-336d-41f0-a29e-bcf75712b335","Type":"ContainerStarted","Data":"b1011a2bf13035195228387f826fee24391702734c17229bc4d883dc7677dc1a"} Jan 29 12:16:11 crc kubenswrapper[4660]: I0129 12:16:11.688833 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" event={"ID":"e86dcb10-336d-41f0-a29e-bcf75712b335","Type":"ContainerStarted","Data":"ad84e0cc9ee6f923f6b71f473bddb18a45ff4c71d9511da130610276e33a0d4b"} Jan 29 12:16:12 crc kubenswrapper[4660]: I0129 12:16:12.694881 4660 generic.go:334] "Generic (PLEG): container finished" podID="e86dcb10-336d-41f0-a29e-bcf75712b335" containerID="ad84e0cc9ee6f923f6b71f473bddb18a45ff4c71d9511da130610276e33a0d4b" exitCode=0 Jan 29 12:16:12 crc kubenswrapper[4660]: I0129 12:16:12.694915 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" event={"ID":"e86dcb10-336d-41f0-a29e-bcf75712b335","Type":"ContainerDied","Data":"ad84e0cc9ee6f923f6b71f473bddb18a45ff4c71d9511da130610276e33a0d4b"} Jan 29 12:16:13 crc kubenswrapper[4660]: I0129 12:16:13.706571 4660 generic.go:334] "Generic (PLEG): container finished" podID="e86dcb10-336d-41f0-a29e-bcf75712b335" containerID="7c0b21cbbb58590a56aa3a8d01012cb53207593f588c19285939b3527d47301c" exitCode=0 Jan 29 12:16:13 crc kubenswrapper[4660]: I0129 12:16:13.706638 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" event={"ID":"e86dcb10-336d-41f0-a29e-bcf75712b335","Type":"ContainerDied","Data":"7c0b21cbbb58590a56aa3a8d01012cb53207593f588c19285939b3527d47301c"} Jan 29 12:16:14 crc kubenswrapper[4660]: I0129 12:16:14.896856 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" Jan 29 12:16:15 crc kubenswrapper[4660]: I0129 12:16:15.042998 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg4wl\" (UniqueName: \"kubernetes.io/projected/e86dcb10-336d-41f0-a29e-bcf75712b335-kube-api-access-bg4wl\") pod \"e86dcb10-336d-41f0-a29e-bcf75712b335\" (UID: \"e86dcb10-336d-41f0-a29e-bcf75712b335\") " Jan 29 12:16:15 crc kubenswrapper[4660]: I0129 12:16:15.043106 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e86dcb10-336d-41f0-a29e-bcf75712b335-bundle\") pod \"e86dcb10-336d-41f0-a29e-bcf75712b335\" (UID: \"e86dcb10-336d-41f0-a29e-bcf75712b335\") " Jan 29 12:16:15 crc kubenswrapper[4660]: I0129 12:16:15.043159 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e86dcb10-336d-41f0-a29e-bcf75712b335-util\") pod \"e86dcb10-336d-41f0-a29e-bcf75712b335\" (UID: \"e86dcb10-336d-41f0-a29e-bcf75712b335\") " Jan 29 12:16:15 crc kubenswrapper[4660]: I0129 12:16:15.043956 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e86dcb10-336d-41f0-a29e-bcf75712b335-bundle" (OuterVolumeSpecName: "bundle") pod "e86dcb10-336d-41f0-a29e-bcf75712b335" (UID: "e86dcb10-336d-41f0-a29e-bcf75712b335"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:16:15 crc kubenswrapper[4660]: I0129 12:16:15.050631 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e86dcb10-336d-41f0-a29e-bcf75712b335-kube-api-access-bg4wl" (OuterVolumeSpecName: "kube-api-access-bg4wl") pod "e86dcb10-336d-41f0-a29e-bcf75712b335" (UID: "e86dcb10-336d-41f0-a29e-bcf75712b335"). InnerVolumeSpecName "kube-api-access-bg4wl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:16:15 crc kubenswrapper[4660]: I0129 12:16:15.057104 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e86dcb10-336d-41f0-a29e-bcf75712b335-util" (OuterVolumeSpecName: "util") pod "e86dcb10-336d-41f0-a29e-bcf75712b335" (UID: "e86dcb10-336d-41f0-a29e-bcf75712b335"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:16:15 crc kubenswrapper[4660]: I0129 12:16:15.144503 4660 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e86dcb10-336d-41f0-a29e-bcf75712b335-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:16:15 crc kubenswrapper[4660]: I0129 12:16:15.144551 4660 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e86dcb10-336d-41f0-a29e-bcf75712b335-util\") on node \"crc\" DevicePath \"\"" Jan 29 12:16:15 crc kubenswrapper[4660]: I0129 12:16:15.144572 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg4wl\" (UniqueName: \"kubernetes.io/projected/e86dcb10-336d-41f0-a29e-bcf75712b335-kube-api-access-bg4wl\") on node \"crc\" DevicePath \"\"" Jan 29 12:16:15 crc kubenswrapper[4660]: I0129 12:16:15.721182 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" event={"ID":"e86dcb10-336d-41f0-a29e-bcf75712b335","Type":"ContainerDied","Data":"b1011a2bf13035195228387f826fee24391702734c17229bc4d883dc7677dc1a"} Jan 29 12:16:15 crc kubenswrapper[4660]: I0129 12:16:15.721247 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1011a2bf13035195228387f826fee24391702734c17229bc4d883dc7677dc1a" Jan 29 12:16:15 crc kubenswrapper[4660]: I0129 12:16:15.721358 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj" Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.131989 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-55lhd"] Jan 29 12:16:17 crc kubenswrapper[4660]: E0129 12:16:17.132353 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e86dcb10-336d-41f0-a29e-bcf75712b335" containerName="pull" Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.132363 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e86dcb10-336d-41f0-a29e-bcf75712b335" containerName="pull" Jan 29 12:16:17 crc kubenswrapper[4660]: E0129 12:16:17.132371 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e86dcb10-336d-41f0-a29e-bcf75712b335" containerName="extract" Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.132377 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e86dcb10-336d-41f0-a29e-bcf75712b335" containerName="extract" Jan 29 12:16:17 crc kubenswrapper[4660]: E0129 12:16:17.132387 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e86dcb10-336d-41f0-a29e-bcf75712b335" containerName="util" Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.132393 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e86dcb10-336d-41f0-a29e-bcf75712b335" containerName="util" Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.132476 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="e86dcb10-336d-41f0-a29e-bcf75712b335" containerName="extract" Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.132854 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-55lhd" Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.134762 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.135184 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-v42bl" Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.143922 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.148037 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-55lhd"] Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.168053 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgbrn\" (UniqueName: \"kubernetes.io/projected/86c25304-5d7d-46c2-b033-0c225c08f448-kube-api-access-xgbrn\") pod \"nmstate-operator-646758c888-55lhd\" (UID: \"86c25304-5d7d-46c2-b033-0c225c08f448\") " pod="openshift-nmstate/nmstate-operator-646758c888-55lhd" Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.269053 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgbrn\" (UniqueName: \"kubernetes.io/projected/86c25304-5d7d-46c2-b033-0c225c08f448-kube-api-access-xgbrn\") pod \"nmstate-operator-646758c888-55lhd\" (UID: \"86c25304-5d7d-46c2-b033-0c225c08f448\") " pod="openshift-nmstate/nmstate-operator-646758c888-55lhd" Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.285635 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgbrn\" (UniqueName: \"kubernetes.io/projected/86c25304-5d7d-46c2-b033-0c225c08f448-kube-api-access-xgbrn\") pod \"nmstate-operator-646758c888-55lhd\" (UID: 
\"86c25304-5d7d-46c2-b033-0c225c08f448\") " pod="openshift-nmstate/nmstate-operator-646758c888-55lhd" Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.445611 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-55lhd" Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.669161 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-55lhd"] Jan 29 12:16:17 crc kubenswrapper[4660]: W0129 12:16:17.674046 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86c25304_5d7d_46c2_b033_0c225c08f448.slice/crio-89538b8dd90020ecdcffb205648930a85c8d51695ea24bf2bb706c3f20e1c55c WatchSource:0}: Error finding container 89538b8dd90020ecdcffb205648930a85c8d51695ea24bf2bb706c3f20e1c55c: Status 404 returned error can't find the container with id 89538b8dd90020ecdcffb205648930a85c8d51695ea24bf2bb706c3f20e1c55c Jan 29 12:16:17 crc kubenswrapper[4660]: I0129 12:16:17.747577 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-55lhd" event={"ID":"86c25304-5d7d-46c2-b033-0c225c08f448","Type":"ContainerStarted","Data":"89538b8dd90020ecdcffb205648930a85c8d51695ea24bf2bb706c3f20e1c55c"} Jan 29 12:16:19 crc kubenswrapper[4660]: I0129 12:16:19.761154 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-55lhd" event={"ID":"86c25304-5d7d-46c2-b033-0c225c08f448","Type":"ContainerStarted","Data":"f01452ef5f1fc17aa922dde0259c183d4752599185b1ee0b2f305c800b4e83ac"} Jan 29 12:16:19 crc kubenswrapper[4660]: I0129 12:16:19.792979 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-55lhd" podStartSLOduration=0.943528668 podStartE2EDuration="2.792953337s" podCreationTimestamp="2026-01-29 12:16:17 +0000 UTC" 
firstStartedPulling="2026-01-29 12:16:17.676621772 +0000 UTC m=+614.899563904" lastFinishedPulling="2026-01-29 12:16:19.526046441 +0000 UTC m=+616.748988573" observedRunningTime="2026-01-29 12:16:19.778779497 +0000 UTC m=+617.001721629" watchObservedRunningTime="2026-01-29 12:16:19.792953337 +0000 UTC m=+617.015895489" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.749296 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-5qnsn"] Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.750637 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-5qnsn" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.753640 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-kcnk5" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.776369 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-5qnsn"] Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.789782 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw"] Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.790610 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.794537 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.804868 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw"] Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.810390 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glzw6\" (UniqueName: \"kubernetes.io/projected/7c673176-aa01-4a2f-b319-f81e28800e05-kube-api-access-glzw6\") pod \"nmstate-metrics-54757c584b-5qnsn\" (UID: \"7c673176-aa01-4a2f-b319-f81e28800e05\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-5qnsn" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.810446 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jrxb\" (UniqueName: \"kubernetes.io/projected/87245509-8882-4337-88dc-9300b488472d-kube-api-access-7jrxb\") pod \"nmstate-webhook-8474b5b9d8-j8hgw\" (UID: \"87245509-8882-4337-88dc-9300b488472d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.810529 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/87245509-8882-4337-88dc-9300b488472d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-j8hgw\" (UID: \"87245509-8882-4337-88dc-9300b488472d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.823454 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-l7psx"] Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.827649 4660 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-l7psx" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.911392 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jrxb\" (UniqueName: \"kubernetes.io/projected/87245509-8882-4337-88dc-9300b488472d-kube-api-access-7jrxb\") pod \"nmstate-webhook-8474b5b9d8-j8hgw\" (UID: \"87245509-8882-4337-88dc-9300b488472d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.911643 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/87245509-8882-4337-88dc-9300b488472d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-j8hgw\" (UID: \"87245509-8882-4337-88dc-9300b488472d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.911721 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glzw6\" (UniqueName: \"kubernetes.io/projected/7c673176-aa01-4a2f-b319-f81e28800e05-kube-api-access-glzw6\") pod \"nmstate-metrics-54757c584b-5qnsn\" (UID: \"7c673176-aa01-4a2f-b319-f81e28800e05\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-5qnsn" Jan 29 12:16:20 crc kubenswrapper[4660]: E0129 12:16:20.911993 4660 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 29 12:16:20 crc kubenswrapper[4660]: E0129 12:16:20.912059 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87245509-8882-4337-88dc-9300b488472d-tls-key-pair podName:87245509-8882-4337-88dc-9300b488472d nodeName:}" failed. No retries permitted until 2026-01-29 12:16:21.412038695 +0000 UTC m=+618.634980827 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/87245509-8882-4337-88dc-9300b488472d-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-j8hgw" (UID: "87245509-8882-4337-88dc-9300b488472d") : secret "openshift-nmstate-webhook" not found Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.940260 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glzw6\" (UniqueName: \"kubernetes.io/projected/7c673176-aa01-4a2f-b319-f81e28800e05-kube-api-access-glzw6\") pod \"nmstate-metrics-54757c584b-5qnsn\" (UID: \"7c673176-aa01-4a2f-b319-f81e28800e05\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-5qnsn" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.943978 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jrxb\" (UniqueName: \"kubernetes.io/projected/87245509-8882-4337-88dc-9300b488472d-kube-api-access-7jrxb\") pod \"nmstate-webhook-8474b5b9d8-j8hgw\" (UID: \"87245509-8882-4337-88dc-9300b488472d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.947212 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x"] Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.947998 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.949675 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.949872 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-4vlkc" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.950030 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 12:16:20 crc kubenswrapper[4660]: I0129 12:16:20.964404 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x"] Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.012506 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv9n8\" (UniqueName: \"kubernetes.io/projected/0289a953-d506-42a8-89ff-fb018ab0d5cd-kube-api-access-bv9n8\") pod \"nmstate-handler-l7psx\" (UID: \"0289a953-d506-42a8-89ff-fb018ab0d5cd\") " pod="openshift-nmstate/nmstate-handler-l7psx" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.012575 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0289a953-d506-42a8-89ff-fb018ab0d5cd-dbus-socket\") pod \"nmstate-handler-l7psx\" (UID: \"0289a953-d506-42a8-89ff-fb018ab0d5cd\") " pod="openshift-nmstate/nmstate-handler-l7psx" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.012852 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0289a953-d506-42a8-89ff-fb018ab0d5cd-nmstate-lock\") pod \"nmstate-handler-l7psx\" (UID: \"0289a953-d506-42a8-89ff-fb018ab0d5cd\") " pod="openshift-nmstate/nmstate-handler-l7psx" Jan 
29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.012974 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0289a953-d506-42a8-89ff-fb018ab0d5cd-ovs-socket\") pod \"nmstate-handler-l7psx\" (UID: \"0289a953-d506-42a8-89ff-fb018ab0d5cd\") " pod="openshift-nmstate/nmstate-handler-l7psx" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.071277 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-5qnsn" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.114587 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv9n8\" (UniqueName: \"kubernetes.io/projected/0289a953-d506-42a8-89ff-fb018ab0d5cd-kube-api-access-bv9n8\") pod \"nmstate-handler-l7psx\" (UID: \"0289a953-d506-42a8-89ff-fb018ab0d5cd\") " pod="openshift-nmstate/nmstate-handler-l7psx" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.114636 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grr4r\" (UniqueName: \"kubernetes.io/projected/d7fab196-9704-4991-8c67-8e0cadd2d4b5-kube-api-access-grr4r\") pod \"nmstate-console-plugin-7754f76f8b-q564x\" (UID: \"d7fab196-9704-4991-8c67-8e0cadd2d4b5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.114667 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0289a953-d506-42a8-89ff-fb018ab0d5cd-dbus-socket\") pod \"nmstate-handler-l7psx\" (UID: \"0289a953-d506-42a8-89ff-fb018ab0d5cd\") " pod="openshift-nmstate/nmstate-handler-l7psx" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.114748 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/0289a953-d506-42a8-89ff-fb018ab0d5cd-nmstate-lock\") pod \"nmstate-handler-l7psx\" (UID: \"0289a953-d506-42a8-89ff-fb018ab0d5cd\") " pod="openshift-nmstate/nmstate-handler-l7psx" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.114788 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d7fab196-9704-4991-8c67-8e0cadd2d4b5-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-q564x\" (UID: \"d7fab196-9704-4991-8c67-8e0cadd2d4b5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.114804 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7fab196-9704-4991-8c67-8e0cadd2d4b5-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-q564x\" (UID: \"d7fab196-9704-4991-8c67-8e0cadd2d4b5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.114824 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0289a953-d506-42a8-89ff-fb018ab0d5cd-ovs-socket\") pod \"nmstate-handler-l7psx\" (UID: \"0289a953-d506-42a8-89ff-fb018ab0d5cd\") " pod="openshift-nmstate/nmstate-handler-l7psx" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.114890 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0289a953-d506-42a8-89ff-fb018ab0d5cd-ovs-socket\") pod \"nmstate-handler-l7psx\" (UID: \"0289a953-d506-42a8-89ff-fb018ab0d5cd\") " pod="openshift-nmstate/nmstate-handler-l7psx" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.114921 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/0289a953-d506-42a8-89ff-fb018ab0d5cd-nmstate-lock\") pod \"nmstate-handler-l7psx\" (UID: \"0289a953-d506-42a8-89ff-fb018ab0d5cd\") " pod="openshift-nmstate/nmstate-handler-l7psx" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.115033 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0289a953-d506-42a8-89ff-fb018ab0d5cd-dbus-socket\") pod \"nmstate-handler-l7psx\" (UID: \"0289a953-d506-42a8-89ff-fb018ab0d5cd\") " pod="openshift-nmstate/nmstate-handler-l7psx" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.142283 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv9n8\" (UniqueName: \"kubernetes.io/projected/0289a953-d506-42a8-89ff-fb018ab0d5cd-kube-api-access-bv9n8\") pod \"nmstate-handler-l7psx\" (UID: \"0289a953-d506-42a8-89ff-fb018ab0d5cd\") " pod="openshift-nmstate/nmstate-handler-l7psx" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.148882 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-l7psx" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.205149 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7458d57f55-l2wkb"] Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.206044 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.218822 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0283345-2f12-4c86-b70d-6165f1fd4041-console-oauth-config\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.218877 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0283345-2f12-4c86-b70d-6165f1fd4041-console-config\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.218904 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0283345-2f12-4c86-b70d-6165f1fd4041-service-ca\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.218925 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0283345-2f12-4c86-b70d-6165f1fd4041-oauth-serving-cert\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.219183 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grr4r\" (UniqueName: \"kubernetes.io/projected/d7fab196-9704-4991-8c67-8e0cadd2d4b5-kube-api-access-grr4r\") pod 
\"nmstate-console-plugin-7754f76f8b-q564x\" (UID: \"d7fab196-9704-4991-8c67-8e0cadd2d4b5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.219264 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hskmg\" (UniqueName: \"kubernetes.io/projected/a0283345-2f12-4c86-b70d-6165f1fd4041-kube-api-access-hskmg\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.219316 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0283345-2f12-4c86-b70d-6165f1fd4041-console-serving-cert\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.219361 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0283345-2f12-4c86-b70d-6165f1fd4041-trusted-ca-bundle\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.219380 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d7fab196-9704-4991-8c67-8e0cadd2d4b5-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-q564x\" (UID: \"d7fab196-9704-4991-8c67-8e0cadd2d4b5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.219402 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d7fab196-9704-4991-8c67-8e0cadd2d4b5-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-q564x\" (UID: \"d7fab196-9704-4991-8c67-8e0cadd2d4b5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.222673 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d7fab196-9704-4991-8c67-8e0cadd2d4b5-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-q564x\" (UID: \"d7fab196-9704-4991-8c67-8e0cadd2d4b5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.230427 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7fab196-9704-4991-8c67-8e0cadd2d4b5-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-q564x\" (UID: \"d7fab196-9704-4991-8c67-8e0cadd2d4b5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.240274 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7458d57f55-l2wkb"] Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.271506 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grr4r\" (UniqueName: \"kubernetes.io/projected/d7fab196-9704-4991-8c67-8e0cadd2d4b5-kube-api-access-grr4r\") pod \"nmstate-console-plugin-7754f76f8b-q564x\" (UID: \"d7fab196-9704-4991-8c67-8e0cadd2d4b5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.292549 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.320463 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hskmg\" (UniqueName: \"kubernetes.io/projected/a0283345-2f12-4c86-b70d-6165f1fd4041-kube-api-access-hskmg\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.320510 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0283345-2f12-4c86-b70d-6165f1fd4041-console-serving-cert\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.320546 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0283345-2f12-4c86-b70d-6165f1fd4041-trusted-ca-bundle\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.320571 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0283345-2f12-4c86-b70d-6165f1fd4041-console-oauth-config\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.320593 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0283345-2f12-4c86-b70d-6165f1fd4041-console-config\") pod \"console-7458d57f55-l2wkb\" (UID: 
\"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.320612 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0283345-2f12-4c86-b70d-6165f1fd4041-service-ca\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.320627 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0283345-2f12-4c86-b70d-6165f1fd4041-oauth-serving-cert\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.321517 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0283345-2f12-4c86-b70d-6165f1fd4041-oauth-serving-cert\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.322081 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0283345-2f12-4c86-b70d-6165f1fd4041-service-ca\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.322338 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0283345-2f12-4c86-b70d-6165f1fd4041-console-config\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " 
pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.323510 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0283345-2f12-4c86-b70d-6165f1fd4041-trusted-ca-bundle\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.329307 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0283345-2f12-4c86-b70d-6165f1fd4041-console-oauth-config\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.335598 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0283345-2f12-4c86-b70d-6165f1fd4041-console-serving-cert\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.351561 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hskmg\" (UniqueName: \"kubernetes.io/projected/a0283345-2f12-4c86-b70d-6165f1fd4041-kube-api-access-hskmg\") pod \"console-7458d57f55-l2wkb\" (UID: \"a0283345-2f12-4c86-b70d-6165f1fd4041\") " pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.421626 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/87245509-8882-4337-88dc-9300b488472d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-j8hgw\" (UID: \"87245509-8882-4337-88dc-9300b488472d\") " 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.424805 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/87245509-8882-4337-88dc-9300b488472d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-j8hgw\" (UID: \"87245509-8882-4337-88dc-9300b488472d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.518265 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-5qnsn"] Jan 29 12:16:21 crc kubenswrapper[4660]: W0129 12:16:21.523127 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c673176_aa01_4a2f_b319_f81e28800e05.slice/crio-d856beb25e7df262c45c8268b185aa759b0fe109df6950e609b120bff6a6c966 WatchSource:0}: Error finding container d856beb25e7df262c45c8268b185aa759b0fe109df6950e609b120bff6a6c966: Status 404 returned error can't find the container with id d856beb25e7df262c45c8268b185aa759b0fe109df6950e609b120bff6a6c966 Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.556030 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.679367 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x"] Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.710025 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw" Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.801229 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x" event={"ID":"d7fab196-9704-4991-8c67-8e0cadd2d4b5","Type":"ContainerStarted","Data":"d036397bc639a1cc5d66b01b3e3b6f592b7006bd4c52e90574704671a5fb15a9"} Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.802008 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-l7psx" event={"ID":"0289a953-d506-42a8-89ff-fb018ab0d5cd","Type":"ContainerStarted","Data":"6392e59a81d87ba8d538bf8792b4598208b8f1bb7a1c10502330dc4b284e6684"} Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.802649 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-5qnsn" event={"ID":"7c673176-aa01-4a2f-b319-f81e28800e05","Type":"ContainerStarted","Data":"d856beb25e7df262c45c8268b185aa759b0fe109df6950e609b120bff6a6c966"} Jan 29 12:16:21 crc kubenswrapper[4660]: I0129 12:16:21.978287 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7458d57f55-l2wkb"] Jan 29 12:16:21 crc kubenswrapper[4660]: W0129 12:16:21.987409 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0283345_2f12_4c86_b70d_6165f1fd4041.slice/crio-060eab3ed70ffd1ec2b125e03cdb6fca8375e479f614cb3067ccdb70abb94ab7 WatchSource:0}: Error finding container 060eab3ed70ffd1ec2b125e03cdb6fca8375e479f614cb3067ccdb70abb94ab7: Status 404 returned error can't find the container with id 060eab3ed70ffd1ec2b125e03cdb6fca8375e479f614cb3067ccdb70abb94ab7 Jan 29 12:16:22 crc kubenswrapper[4660]: I0129 12:16:22.100208 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw"] Jan 29 12:16:22 crc kubenswrapper[4660]: W0129 
12:16:22.101740 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87245509_8882_4337_88dc_9300b488472d.slice/crio-16d48978d1591fd487b5a3e9e258439a098537c73e61deac7855e97b65ade0e2 WatchSource:0}: Error finding container 16d48978d1591fd487b5a3e9e258439a098537c73e61deac7855e97b65ade0e2: Status 404 returned error can't find the container with id 16d48978d1591fd487b5a3e9e258439a098537c73e61deac7855e97b65ade0e2 Jan 29 12:16:22 crc kubenswrapper[4660]: I0129 12:16:22.811951 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw" event={"ID":"87245509-8882-4337-88dc-9300b488472d","Type":"ContainerStarted","Data":"16d48978d1591fd487b5a3e9e258439a098537c73e61deac7855e97b65ade0e2"} Jan 29 12:16:22 crc kubenswrapper[4660]: I0129 12:16:22.815186 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7458d57f55-l2wkb" event={"ID":"a0283345-2f12-4c86-b70d-6165f1fd4041","Type":"ContainerStarted","Data":"1eb62a108837a30d6dae9fd2a0fedc6145c7e26047e1dba63c41ccaade78a487"} Jan 29 12:16:22 crc kubenswrapper[4660]: I0129 12:16:22.815227 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7458d57f55-l2wkb" event={"ID":"a0283345-2f12-4c86-b70d-6165f1fd4041","Type":"ContainerStarted","Data":"060eab3ed70ffd1ec2b125e03cdb6fca8375e479f614cb3067ccdb70abb94ab7"} Jan 29 12:16:22 crc kubenswrapper[4660]: I0129 12:16:22.841223 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7458d57f55-l2wkb" podStartSLOduration=1.841200079 podStartE2EDuration="1.841200079s" podCreationTimestamp="2026-01-29 12:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:16:22.832373694 +0000 UTC m=+620.055315846" watchObservedRunningTime="2026-01-29 
12:16:22.841200079 +0000 UTC m=+620.064142211" Jan 29 12:16:24 crc kubenswrapper[4660]: I0129 12:16:24.828867 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x" event={"ID":"d7fab196-9704-4991-8c67-8e0cadd2d4b5","Type":"ContainerStarted","Data":"b377f645e8a5e67535bb01abbf1bab27cafda78b74e540bbceece9d2a53a2f4d"} Jan 29 12:16:24 crc kubenswrapper[4660]: I0129 12:16:24.831187 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw" event={"ID":"87245509-8882-4337-88dc-9300b488472d","Type":"ContainerStarted","Data":"9044a45d949e1abb7950ea65fe6132c51a04ec2267211a33a9ce223fe19b0eb9"} Jan 29 12:16:24 crc kubenswrapper[4660]: I0129 12:16:24.831325 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw" Jan 29 12:16:24 crc kubenswrapper[4660]: I0129 12:16:24.833173 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-l7psx" event={"ID":"0289a953-d506-42a8-89ff-fb018ab0d5cd","Type":"ContainerStarted","Data":"9ff9d653f004520a54abc4a29301b95f1643d7be15e4b7a5f603f8cbfa3885b4"} Jan 29 12:16:24 crc kubenswrapper[4660]: I0129 12:16:24.833860 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-l7psx" Jan 29 12:16:24 crc kubenswrapper[4660]: I0129 12:16:24.836090 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-5qnsn" event={"ID":"7c673176-aa01-4a2f-b319-f81e28800e05","Type":"ContainerStarted","Data":"3eefbd7d22b6fc7c30772d4c9bdcacbde0efd3f1f1b97d78271368b2ebbf06c3"} Jan 29 12:16:24 crc kubenswrapper[4660]: I0129 12:16:24.864595 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-q564x" podStartSLOduration=2.37867927 podStartE2EDuration="4.864575917s" 
podCreationTimestamp="2026-01-29 12:16:20 +0000 UTC" firstStartedPulling="2026-01-29 12:16:21.69196462 +0000 UTC m=+618.914906742" lastFinishedPulling="2026-01-29 12:16:24.177861247 +0000 UTC m=+621.400803389" observedRunningTime="2026-01-29 12:16:24.84635165 +0000 UTC m=+622.069293812" watchObservedRunningTime="2026-01-29 12:16:24.864575917 +0000 UTC m=+622.087518049" Jan 29 12:16:24 crc kubenswrapper[4660]: I0129 12:16:24.872437 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-l7psx" podStartSLOduration=1.9026934519999998 podStartE2EDuration="4.872418394s" podCreationTimestamp="2026-01-29 12:16:20 +0000 UTC" firstStartedPulling="2026-01-29 12:16:21.211888773 +0000 UTC m=+618.434830905" lastFinishedPulling="2026-01-29 12:16:24.181613705 +0000 UTC m=+621.404555847" observedRunningTime="2026-01-29 12:16:24.864006721 +0000 UTC m=+622.086948853" watchObservedRunningTime="2026-01-29 12:16:24.872418394 +0000 UTC m=+622.095360526" Jan 29 12:16:24 crc kubenswrapper[4660]: I0129 12:16:24.890752 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw" podStartSLOduration=2.810587764 podStartE2EDuration="4.890730633s" podCreationTimestamp="2026-01-29 12:16:20 +0000 UTC" firstStartedPulling="2026-01-29 12:16:22.103635369 +0000 UTC m=+619.326577501" lastFinishedPulling="2026-01-29 12:16:24.183778238 +0000 UTC m=+621.406720370" observedRunningTime="2026-01-29 12:16:24.882600988 +0000 UTC m=+622.105543130" watchObservedRunningTime="2026-01-29 12:16:24.890730633 +0000 UTC m=+622.113672775" Jan 29 12:16:26 crc kubenswrapper[4660]: I0129 12:16:26.268976 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 
12:16:26 crc kubenswrapper[4660]: I0129 12:16:26.269454 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:16:26 crc kubenswrapper[4660]: I0129 12:16:26.269510 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:16:26 crc kubenswrapper[4660]: I0129 12:16:26.270237 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0685f6fda28c06cc5da879db051ed52b865f71c1639d6f19634190e4d0d6734"} pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:16:26 crc kubenswrapper[4660]: I0129 12:16:26.270306 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" containerID="cri-o://a0685f6fda28c06cc5da879db051ed52b865f71c1639d6f19634190e4d0d6734" gracePeriod=600 Jan 29 12:16:26 crc kubenswrapper[4660]: I0129 12:16:26.847492 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-5qnsn" event={"ID":"7c673176-aa01-4a2f-b319-f81e28800e05","Type":"ContainerStarted","Data":"a95e1520e8dd02f1715c3242359c2698192780a870a3c585d39d6ca21cda6862"} Jan 29 12:16:26 crc kubenswrapper[4660]: I0129 12:16:26.853209 4660 generic.go:334] "Generic (PLEG): container finished" podID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerID="a0685f6fda28c06cc5da879db051ed52b865f71c1639d6f19634190e4d0d6734" 
exitCode=0 Jan 29 12:16:26 crc kubenswrapper[4660]: I0129 12:16:26.853261 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerDied","Data":"a0685f6fda28c06cc5da879db051ed52b865f71c1639d6f19634190e4d0d6734"} Jan 29 12:16:26 crc kubenswrapper[4660]: I0129 12:16:26.853306 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerStarted","Data":"2c16cbdb8ef8617300de27cd5dec6cb5c0e6298a291719baa852d3f0432803d5"} Jan 29 12:16:26 crc kubenswrapper[4660]: I0129 12:16:26.853323 4660 scope.go:117] "RemoveContainer" containerID="eff0713f6bf872ade78cf9d0d16ea58cba32680112838028abfc718ba0e896cc" Jan 29 12:16:26 crc kubenswrapper[4660]: I0129 12:16:26.895635 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-5qnsn" podStartSLOduration=2.2581395459999998 podStartE2EDuration="6.895616896s" podCreationTimestamp="2026-01-29 12:16:20 +0000 UTC" firstStartedPulling="2026-01-29 12:16:21.530663897 +0000 UTC m=+618.753606029" lastFinishedPulling="2026-01-29 12:16:26.168141247 +0000 UTC m=+623.391083379" observedRunningTime="2026-01-29 12:16:26.873019783 +0000 UTC m=+624.095961915" watchObservedRunningTime="2026-01-29 12:16:26.895616896 +0000 UTC m=+624.118559028" Jan 29 12:16:31 crc kubenswrapper[4660]: I0129 12:16:31.181453 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-l7psx" Jan 29 12:16:31 crc kubenswrapper[4660]: I0129 12:16:31.559009 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:31 crc kubenswrapper[4660]: I0129 12:16:31.559057 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:31 crc kubenswrapper[4660]: I0129 12:16:31.563143 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:31 crc kubenswrapper[4660]: I0129 12:16:31.888052 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7458d57f55-l2wkb" Jan 29 12:16:31 crc kubenswrapper[4660]: I0129 12:16:31.945903 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tvjqj"] Jan 29 12:16:41 crc kubenswrapper[4660]: I0129 12:16:41.720490 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-j8hgw" Jan 29 12:16:54 crc kubenswrapper[4660]: I0129 12:16:54.738439 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql"] Jan 29 12:16:54 crc kubenswrapper[4660]: I0129 12:16:54.740674 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" Jan 29 12:16:54 crc kubenswrapper[4660]: I0129 12:16:54.742462 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 12:16:54 crc kubenswrapper[4660]: I0129 12:16:54.748830 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql"] Jan 29 12:16:54 crc kubenswrapper[4660]: I0129 12:16:54.916400 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5cca2967-8925-4c9d-8e9f-8912305e7163-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql\" (UID: \"5cca2967-8925-4c9d-8e9f-8912305e7163\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" Jan 29 12:16:54 crc kubenswrapper[4660]: I0129 12:16:54.916599 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5pkn\" (UniqueName: \"kubernetes.io/projected/5cca2967-8925-4c9d-8e9f-8912305e7163-kube-api-access-w5pkn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql\" (UID: \"5cca2967-8925-4c9d-8e9f-8912305e7163\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" Jan 29 12:16:54 crc kubenswrapper[4660]: I0129 12:16:54.916779 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5cca2967-8925-4c9d-8e9f-8912305e7163-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql\" (UID: \"5cca2967-8925-4c9d-8e9f-8912305e7163\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" Jan 29 12:16:55 crc kubenswrapper[4660]: 
I0129 12:16:55.018586 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5pkn\" (UniqueName: \"kubernetes.io/projected/5cca2967-8925-4c9d-8e9f-8912305e7163-kube-api-access-w5pkn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql\" (UID: \"5cca2967-8925-4c9d-8e9f-8912305e7163\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" Jan 29 12:16:55 crc kubenswrapper[4660]: I0129 12:16:55.018759 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5cca2967-8925-4c9d-8e9f-8912305e7163-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql\" (UID: \"5cca2967-8925-4c9d-8e9f-8912305e7163\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" Jan 29 12:16:55 crc kubenswrapper[4660]: I0129 12:16:55.018822 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5cca2967-8925-4c9d-8e9f-8912305e7163-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql\" (UID: \"5cca2967-8925-4c9d-8e9f-8912305e7163\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" Jan 29 12:16:55 crc kubenswrapper[4660]: I0129 12:16:55.019372 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5cca2967-8925-4c9d-8e9f-8912305e7163-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql\" (UID: \"5cca2967-8925-4c9d-8e9f-8912305e7163\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" Jan 29 12:16:55 crc kubenswrapper[4660]: I0129 12:16:55.019405 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5cca2967-8925-4c9d-8e9f-8912305e7163-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql\" (UID: \"5cca2967-8925-4c9d-8e9f-8912305e7163\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" Jan 29 12:16:55 crc kubenswrapper[4660]: I0129 12:16:55.046213 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5pkn\" (UniqueName: \"kubernetes.io/projected/5cca2967-8925-4c9d-8e9f-8912305e7163-kube-api-access-w5pkn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql\" (UID: \"5cca2967-8925-4c9d-8e9f-8912305e7163\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" Jan 29 12:16:55 crc kubenswrapper[4660]: I0129 12:16:55.181105 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" Jan 29 12:16:55 crc kubenswrapper[4660]: I0129 12:16:55.615279 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql"] Jan 29 12:16:56 crc kubenswrapper[4660]: I0129 12:16:56.037303 4660 generic.go:334] "Generic (PLEG): container finished" podID="5cca2967-8925-4c9d-8e9f-8912305e7163" containerID="c9541fae16972acde877aca3962cea0b27ff269a9a656a475cbb9e3171c9bd34" exitCode=0 Jan 29 12:16:56 crc kubenswrapper[4660]: I0129 12:16:56.037391 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" event={"ID":"5cca2967-8925-4c9d-8e9f-8912305e7163","Type":"ContainerDied","Data":"c9541fae16972acde877aca3962cea0b27ff269a9a656a475cbb9e3171c9bd34"} Jan 29 12:16:56 crc kubenswrapper[4660]: I0129 12:16:56.037635 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" event={"ID":"5cca2967-8925-4c9d-8e9f-8912305e7163","Type":"ContainerStarted","Data":"bb10d77cf4e3f0e0b411bf3b593d578885b2be6dd7e68585cc58f32a923183e0"} Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.000053 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-tvjqj" podUID="b177214f-7d4c-4f4f-8741-3a2695d1c495" containerName="console" containerID="cri-o://7cdd867575a3582a6e0d424c110a2aef9e6fe40926fbcade0402566adb91ab32" gracePeriod=15 Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.521043 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tvjqj_b177214f-7d4c-4f4f-8741-3a2695d1c495/console/0.log" Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.521403 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.649901 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-trusted-ca-bundle\") pod \"b177214f-7d4c-4f4f-8741-3a2695d1c495\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.649996 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-config\") pod \"b177214f-7d4c-4f4f-8741-3a2695d1c495\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.650047 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-oauth-config\") 
pod \"b177214f-7d4c-4f4f-8741-3a2695d1c495\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.650092 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-service-ca\") pod \"b177214f-7d4c-4f4f-8741-3a2695d1c495\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.650125 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-oauth-serving-cert\") pod \"b177214f-7d4c-4f4f-8741-3a2695d1c495\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.650154 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-serving-cert\") pod \"b177214f-7d4c-4f4f-8741-3a2695d1c495\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.650174 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8f8\" (UniqueName: \"kubernetes.io/projected/b177214f-7d4c-4f4f-8741-3a2695d1c495-kube-api-access-zg8f8\") pod \"b177214f-7d4c-4f4f-8741-3a2695d1c495\" (UID: \"b177214f-7d4c-4f4f-8741-3a2695d1c495\") " Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.650859 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-config" (OuterVolumeSpecName: "console-config") pod "b177214f-7d4c-4f4f-8741-3a2695d1c495" (UID: "b177214f-7d4c-4f4f-8741-3a2695d1c495"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.650888 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b177214f-7d4c-4f4f-8741-3a2695d1c495" (UID: "b177214f-7d4c-4f4f-8741-3a2695d1c495"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.652010 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b177214f-7d4c-4f4f-8741-3a2695d1c495" (UID: "b177214f-7d4c-4f4f-8741-3a2695d1c495"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.652053 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-service-ca" (OuterVolumeSpecName: "service-ca") pod "b177214f-7d4c-4f4f-8741-3a2695d1c495" (UID: "b177214f-7d4c-4f4f-8741-3a2695d1c495"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.658538 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b177214f-7d4c-4f4f-8741-3a2695d1c495" (UID: "b177214f-7d4c-4f4f-8741-3a2695d1c495"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.659598 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b177214f-7d4c-4f4f-8741-3a2695d1c495" (UID: "b177214f-7d4c-4f4f-8741-3a2695d1c495"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.659617 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b177214f-7d4c-4f4f-8741-3a2695d1c495-kube-api-access-zg8f8" (OuterVolumeSpecName: "kube-api-access-zg8f8") pod "b177214f-7d4c-4f4f-8741-3a2695d1c495" (UID: "b177214f-7d4c-4f4f-8741-3a2695d1c495"). InnerVolumeSpecName "kube-api-access-zg8f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.751973 4660 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.752029 4660 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.752040 4660 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.752049 4660 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.752059 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg8f8\" (UniqueName: \"kubernetes.io/projected/b177214f-7d4c-4f4f-8741-3a2695d1c495-kube-api-access-zg8f8\") on node \"crc\" DevicePath \"\"" Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.752069 4660 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:16:57 crc kubenswrapper[4660]: I0129 12:16:57.752077 4660 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b177214f-7d4c-4f4f-8741-3a2695d1c495-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:16:58 crc kubenswrapper[4660]: I0129 12:16:58.051950 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tvjqj_b177214f-7d4c-4f4f-8741-3a2695d1c495/console/0.log" Jan 29 12:16:58 crc kubenswrapper[4660]: I0129 12:16:58.052021 4660 generic.go:334] "Generic (PLEG): container finished" podID="b177214f-7d4c-4f4f-8741-3a2695d1c495" containerID="7cdd867575a3582a6e0d424c110a2aef9e6fe40926fbcade0402566adb91ab32" exitCode=2 Jan 29 12:16:58 crc kubenswrapper[4660]: I0129 12:16:58.052063 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-tvjqj" Jan 29 12:16:58 crc kubenswrapper[4660]: I0129 12:16:58.052105 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tvjqj" event={"ID":"b177214f-7d4c-4f4f-8741-3a2695d1c495","Type":"ContainerDied","Data":"7cdd867575a3582a6e0d424c110a2aef9e6fe40926fbcade0402566adb91ab32"} Jan 29 12:16:58 crc kubenswrapper[4660]: I0129 12:16:58.052140 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tvjqj" event={"ID":"b177214f-7d4c-4f4f-8741-3a2695d1c495","Type":"ContainerDied","Data":"312f2a0a6faf6dc5eb08908b173df3c71af0bef6bd831deb9a5b2fc17159bd7f"} Jan 29 12:16:58 crc kubenswrapper[4660]: I0129 12:16:58.052167 4660 scope.go:117] "RemoveContainer" containerID="7cdd867575a3582a6e0d424c110a2aef9e6fe40926fbcade0402566adb91ab32" Jan 29 12:16:58 crc kubenswrapper[4660]: I0129 12:16:58.055279 4660 generic.go:334] "Generic (PLEG): container finished" podID="5cca2967-8925-4c9d-8e9f-8912305e7163" containerID="37ef1ab182358011ebc85e0884b2f8979564c4bd2465f58156a4192d549b5555" exitCode=0 Jan 29 12:16:58 crc kubenswrapper[4660]: I0129 12:16:58.055436 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" event={"ID":"5cca2967-8925-4c9d-8e9f-8912305e7163","Type":"ContainerDied","Data":"37ef1ab182358011ebc85e0884b2f8979564c4bd2465f58156a4192d549b5555"} Jan 29 12:16:58 crc kubenswrapper[4660]: I0129 12:16:58.099880 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tvjqj"] Jan 29 12:16:58 crc kubenswrapper[4660]: I0129 12:16:58.101802 4660 scope.go:117] "RemoveContainer" containerID="7cdd867575a3582a6e0d424c110a2aef9e6fe40926fbcade0402566adb91ab32" Jan 29 12:16:58 crc kubenswrapper[4660]: E0129 12:16:58.102321 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"7cdd867575a3582a6e0d424c110a2aef9e6fe40926fbcade0402566adb91ab32\": container with ID starting with 7cdd867575a3582a6e0d424c110a2aef9e6fe40926fbcade0402566adb91ab32 not found: ID does not exist" containerID="7cdd867575a3582a6e0d424c110a2aef9e6fe40926fbcade0402566adb91ab32" Jan 29 12:16:58 crc kubenswrapper[4660]: I0129 12:16:58.102356 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cdd867575a3582a6e0d424c110a2aef9e6fe40926fbcade0402566adb91ab32"} err="failed to get container status \"7cdd867575a3582a6e0d424c110a2aef9e6fe40926fbcade0402566adb91ab32\": rpc error: code = NotFound desc = could not find container \"7cdd867575a3582a6e0d424c110a2aef9e6fe40926fbcade0402566adb91ab32\": container with ID starting with 7cdd867575a3582a6e0d424c110a2aef9e6fe40926fbcade0402566adb91ab32 not found: ID does not exist" Jan 29 12:16:58 crc kubenswrapper[4660]: I0129 12:16:58.103556 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-tvjqj"] Jan 29 12:16:59 crc kubenswrapper[4660]: I0129 12:16:59.063849 4660 generic.go:334] "Generic (PLEG): container finished" podID="5cca2967-8925-4c9d-8e9f-8912305e7163" containerID="454633002bd4a9c658f536bb83bcbd0736d69377471233f6157ed4a6fa6278f3" exitCode=0 Jan 29 12:16:59 crc kubenswrapper[4660]: I0129 12:16:59.063969 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" event={"ID":"5cca2967-8925-4c9d-8e9f-8912305e7163","Type":"ContainerDied","Data":"454633002bd4a9c658f536bb83bcbd0736d69377471233f6157ed4a6fa6278f3"} Jan 29 12:16:59 crc kubenswrapper[4660]: I0129 12:16:59.479428 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b177214f-7d4c-4f4f-8741-3a2695d1c495" path="/var/lib/kubelet/pods/b177214f-7d4c-4f4f-8741-3a2695d1c495/volumes" Jan 29 12:17:00 crc kubenswrapper[4660]: I0129 12:17:00.304932 
4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" Jan 29 12:17:00 crc kubenswrapper[4660]: I0129 12:17:00.490608 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5cca2967-8925-4c9d-8e9f-8912305e7163-util\") pod \"5cca2967-8925-4c9d-8e9f-8912305e7163\" (UID: \"5cca2967-8925-4c9d-8e9f-8912305e7163\") " Jan 29 12:17:00 crc kubenswrapper[4660]: I0129 12:17:00.490738 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5pkn\" (UniqueName: \"kubernetes.io/projected/5cca2967-8925-4c9d-8e9f-8912305e7163-kube-api-access-w5pkn\") pod \"5cca2967-8925-4c9d-8e9f-8912305e7163\" (UID: \"5cca2967-8925-4c9d-8e9f-8912305e7163\") " Jan 29 12:17:00 crc kubenswrapper[4660]: I0129 12:17:00.490772 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5cca2967-8925-4c9d-8e9f-8912305e7163-bundle\") pod \"5cca2967-8925-4c9d-8e9f-8912305e7163\" (UID: \"5cca2967-8925-4c9d-8e9f-8912305e7163\") " Jan 29 12:17:00 crc kubenswrapper[4660]: I0129 12:17:00.491766 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cca2967-8925-4c9d-8e9f-8912305e7163-bundle" (OuterVolumeSpecName: "bundle") pod "5cca2967-8925-4c9d-8e9f-8912305e7163" (UID: "5cca2967-8925-4c9d-8e9f-8912305e7163"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:17:00 crc kubenswrapper[4660]: I0129 12:17:00.496912 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cca2967-8925-4c9d-8e9f-8912305e7163-kube-api-access-w5pkn" (OuterVolumeSpecName: "kube-api-access-w5pkn") pod "5cca2967-8925-4c9d-8e9f-8912305e7163" (UID: "5cca2967-8925-4c9d-8e9f-8912305e7163"). 
InnerVolumeSpecName "kube-api-access-w5pkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:17:00 crc kubenswrapper[4660]: I0129 12:17:00.505339 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cca2967-8925-4c9d-8e9f-8912305e7163-util" (OuterVolumeSpecName: "util") pod "5cca2967-8925-4c9d-8e9f-8912305e7163" (UID: "5cca2967-8925-4c9d-8e9f-8912305e7163"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:17:00 crc kubenswrapper[4660]: I0129 12:17:00.592086 4660 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5cca2967-8925-4c9d-8e9f-8912305e7163-util\") on node \"crc\" DevicePath \"\"" Jan 29 12:17:00 crc kubenswrapper[4660]: I0129 12:17:00.592128 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5pkn\" (UniqueName: \"kubernetes.io/projected/5cca2967-8925-4c9d-8e9f-8912305e7163-kube-api-access-w5pkn\") on node \"crc\" DevicePath \"\"" Jan 29 12:17:00 crc kubenswrapper[4660]: I0129 12:17:00.592140 4660 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5cca2967-8925-4c9d-8e9f-8912305e7163-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:17:01 crc kubenswrapper[4660]: I0129 12:17:01.080406 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" event={"ID":"5cca2967-8925-4c9d-8e9f-8912305e7163","Type":"ContainerDied","Data":"bb10d77cf4e3f0e0b411bf3b593d578885b2be6dd7e68585cc58f32a923183e0"} Jan 29 12:17:01 crc kubenswrapper[4660]: I0129 12:17:01.080446 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb10d77cf4e3f0e0b411bf3b593d578885b2be6dd7e68585cc58f32a923183e0" Jan 29 12:17:01 crc kubenswrapper[4660]: I0129 12:17:01.080538 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.608142 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck"] Jan 29 12:17:09 crc kubenswrapper[4660]: E0129 12:17:09.610588 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cca2967-8925-4c9d-8e9f-8912305e7163" containerName="pull" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.610683 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cca2967-8925-4c9d-8e9f-8912305e7163" containerName="pull" Jan 29 12:17:09 crc kubenswrapper[4660]: E0129 12:17:09.610913 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cca2967-8925-4c9d-8e9f-8912305e7163" containerName="util" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.610990 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cca2967-8925-4c9d-8e9f-8912305e7163" containerName="util" Jan 29 12:17:09 crc kubenswrapper[4660]: E0129 12:17:09.611067 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b177214f-7d4c-4f4f-8741-3a2695d1c495" containerName="console" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.611133 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="b177214f-7d4c-4f4f-8741-3a2695d1c495" containerName="console" Jan 29 12:17:09 crc kubenswrapper[4660]: E0129 12:17:09.611209 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cca2967-8925-4c9d-8e9f-8912305e7163" containerName="extract" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.611273 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cca2967-8925-4c9d-8e9f-8912305e7163" containerName="extract" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.611457 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cca2967-8925-4c9d-8e9f-8912305e7163" 
containerName="extract" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.611541 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="b177214f-7d4c-4f4f-8741-3a2695d1c495" containerName="console" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.612038 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.617055 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.617980 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.618433 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.618615 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.619178 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-tg5wm" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.705634 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck"] Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.709353 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d7688d4-6ab1-40b9-aadc-08ca5bb4be13-webhook-cert\") pod \"metallb-operator-controller-manager-b8955cf6-tjmck\" (UID: \"3d7688d4-6ab1-40b9-aadc-08ca5bb4be13\") " pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck" Jan 29 12:17:09 
crc kubenswrapper[4660]: I0129 12:17:09.709480 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvdlb\" (UniqueName: \"kubernetes.io/projected/3d7688d4-6ab1-40b9-aadc-08ca5bb4be13-kube-api-access-wvdlb\") pod \"metallb-operator-controller-manager-b8955cf6-tjmck\" (UID: \"3d7688d4-6ab1-40b9-aadc-08ca5bb4be13\") " pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.709555 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d7688d4-6ab1-40b9-aadc-08ca5bb4be13-apiservice-cert\") pod \"metallb-operator-controller-manager-b8955cf6-tjmck\" (UID: \"3d7688d4-6ab1-40b9-aadc-08ca5bb4be13\") " pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.810639 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d7688d4-6ab1-40b9-aadc-08ca5bb4be13-webhook-cert\") pod \"metallb-operator-controller-manager-b8955cf6-tjmck\" (UID: \"3d7688d4-6ab1-40b9-aadc-08ca5bb4be13\") " pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.810960 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvdlb\" (UniqueName: \"kubernetes.io/projected/3d7688d4-6ab1-40b9-aadc-08ca5bb4be13-kube-api-access-wvdlb\") pod \"metallb-operator-controller-manager-b8955cf6-tjmck\" (UID: \"3d7688d4-6ab1-40b9-aadc-08ca5bb4be13\") " pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.811068 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/3d7688d4-6ab1-40b9-aadc-08ca5bb4be13-apiservice-cert\") pod \"metallb-operator-controller-manager-b8955cf6-tjmck\" (UID: \"3d7688d4-6ab1-40b9-aadc-08ca5bb4be13\") " pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.818860 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d7688d4-6ab1-40b9-aadc-08ca5bb4be13-webhook-cert\") pod \"metallb-operator-controller-manager-b8955cf6-tjmck\" (UID: \"3d7688d4-6ab1-40b9-aadc-08ca5bb4be13\") " pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.823305 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d7688d4-6ab1-40b9-aadc-08ca5bb4be13-apiservice-cert\") pod \"metallb-operator-controller-manager-b8955cf6-tjmck\" (UID: \"3d7688d4-6ab1-40b9-aadc-08ca5bb4be13\") " pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.855517 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvdlb\" (UniqueName: \"kubernetes.io/projected/3d7688d4-6ab1-40b9-aadc-08ca5bb4be13-kube-api-access-wvdlb\") pod \"metallb-operator-controller-manager-b8955cf6-tjmck\" (UID: \"3d7688d4-6ab1-40b9-aadc-08ca5bb4be13\") " pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck" Jan 29 12:17:09 crc kubenswrapper[4660]: I0129 12:17:09.932484 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck" Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.103201 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4"] Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.116665 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4" Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.120759 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4"] Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.121539 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.121587 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-v8xrw" Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.121800 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.219139 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsjpj\" (UniqueName: \"kubernetes.io/projected/2a0050d9-566a-4127-ae73-093fe7fcef53-kube-api-access-tsjpj\") pod \"metallb-operator-webhook-server-5964957fbf-px6c4\" (UID: \"2a0050d9-566a-4127-ae73-093fe7fcef53\") " pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4" Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.219197 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2a0050d9-566a-4127-ae73-093fe7fcef53-apiservice-cert\") pod 
\"metallb-operator-webhook-server-5964957fbf-px6c4\" (UID: \"2a0050d9-566a-4127-ae73-093fe7fcef53\") " pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4" Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.219268 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2a0050d9-566a-4127-ae73-093fe7fcef53-webhook-cert\") pod \"metallb-operator-webhook-server-5964957fbf-px6c4\" (UID: \"2a0050d9-566a-4127-ae73-093fe7fcef53\") " pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4" Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.311868 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck"] Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.320480 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsjpj\" (UniqueName: \"kubernetes.io/projected/2a0050d9-566a-4127-ae73-093fe7fcef53-kube-api-access-tsjpj\") pod \"metallb-operator-webhook-server-5964957fbf-px6c4\" (UID: \"2a0050d9-566a-4127-ae73-093fe7fcef53\") " pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4" Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.320534 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2a0050d9-566a-4127-ae73-093fe7fcef53-apiservice-cert\") pod \"metallb-operator-webhook-server-5964957fbf-px6c4\" (UID: \"2a0050d9-566a-4127-ae73-093fe7fcef53\") " pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4" Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.320604 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2a0050d9-566a-4127-ae73-093fe7fcef53-webhook-cert\") pod \"metallb-operator-webhook-server-5964957fbf-px6c4\" 
(UID: \"2a0050d9-566a-4127-ae73-093fe7fcef53\") " pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4" Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.329484 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2a0050d9-566a-4127-ae73-093fe7fcef53-apiservice-cert\") pod \"metallb-operator-webhook-server-5964957fbf-px6c4\" (UID: \"2a0050d9-566a-4127-ae73-093fe7fcef53\") " pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4" Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.330538 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2a0050d9-566a-4127-ae73-093fe7fcef53-webhook-cert\") pod \"metallb-operator-webhook-server-5964957fbf-px6c4\" (UID: \"2a0050d9-566a-4127-ae73-093fe7fcef53\") " pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4" Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.348289 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsjpj\" (UniqueName: \"kubernetes.io/projected/2a0050d9-566a-4127-ae73-093fe7fcef53-kube-api-access-tsjpj\") pod \"metallb-operator-webhook-server-5964957fbf-px6c4\" (UID: \"2a0050d9-566a-4127-ae73-093fe7fcef53\") " pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4" Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.458246 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4" Jan 29 12:17:10 crc kubenswrapper[4660]: I0129 12:17:10.732785 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4"] Jan 29 12:17:10 crc kubenswrapper[4660]: W0129 12:17:10.741737 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a0050d9_566a_4127_ae73_093fe7fcef53.slice/crio-358fb0c78e1e6f4373577a46aa391e47d2a86135d4639425b4bef4054f18809e WatchSource:0}: Error finding container 358fb0c78e1e6f4373577a46aa391e47d2a86135d4639425b4bef4054f18809e: Status 404 returned error can't find the container with id 358fb0c78e1e6f4373577a46aa391e47d2a86135d4639425b4bef4054f18809e Jan 29 12:17:11 crc kubenswrapper[4660]: I0129 12:17:11.152752 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck" event={"ID":"3d7688d4-6ab1-40b9-aadc-08ca5bb4be13","Type":"ContainerStarted","Data":"845c5eebd29e7e976e912f2e5d5a0cae54c7a5460fef51d4de31e00736d653f2"} Jan 29 12:17:11 crc kubenswrapper[4660]: I0129 12:17:11.157319 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4" event={"ID":"2a0050d9-566a-4127-ae73-093fe7fcef53","Type":"ContainerStarted","Data":"358fb0c78e1e6f4373577a46aa391e47d2a86135d4639425b4bef4054f18809e"} Jan 29 12:17:17 crc kubenswrapper[4660]: I0129 12:17:17.193001 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4" event={"ID":"2a0050d9-566a-4127-ae73-093fe7fcef53","Type":"ContainerStarted","Data":"7b5b9a361bba5d911fc37f59ee34591eca491dae85d890c4673054f9eea2cb91"} Jan 29 12:17:17 crc kubenswrapper[4660]: I0129 12:17:17.194623 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck" event={"ID":"3d7688d4-6ab1-40b9-aadc-08ca5bb4be13","Type":"ContainerStarted","Data":"a5263497c6b4dbf4bcaed34557224a6d0a4a6d092269b39b4d33307e07eee292"} Jan 29 12:17:17 crc kubenswrapper[4660]: I0129 12:17:17.194787 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4" Jan 29 12:17:17 crc kubenswrapper[4660]: I0129 12:17:17.194938 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck" Jan 29 12:17:17 crc kubenswrapper[4660]: I0129 12:17:17.235289 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck" podStartSLOduration=1.9245845799999999 podStartE2EDuration="8.23527552s" podCreationTimestamp="2026-01-29 12:17:09 +0000 UTC" firstStartedPulling="2026-01-29 12:17:10.334067647 +0000 UTC m=+667.557009779" lastFinishedPulling="2026-01-29 12:17:16.644758587 +0000 UTC m=+673.867700719" observedRunningTime="2026-01-29 12:17:17.233227702 +0000 UTC m=+674.456169834" watchObservedRunningTime="2026-01-29 12:17:17.23527552 +0000 UTC m=+674.458217652" Jan 29 12:17:17 crc kubenswrapper[4660]: I0129 12:17:17.236356 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4" podStartSLOduration=1.313488498 podStartE2EDuration="7.236350971s" podCreationTimestamp="2026-01-29 12:17:10 +0000 UTC" firstStartedPulling="2026-01-29 12:17:10.745580955 +0000 UTC m=+667.968523087" lastFinishedPulling="2026-01-29 12:17:16.668443428 +0000 UTC m=+673.891385560" observedRunningTime="2026-01-29 12:17:17.218083563 +0000 UTC m=+674.441025695" watchObservedRunningTime="2026-01-29 12:17:17.236350971 +0000 UTC m=+674.459293103" Jan 29 12:17:30 crc kubenswrapper[4660]: I0129 12:17:30.465735 4660 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5964957fbf-px6c4"
Jan 29 12:17:49 crc kubenswrapper[4660]: I0129 12:17:49.935019 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-b8955cf6-tjmck"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.676482 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k"]
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.677660 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.681066 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.682251 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-z2bhd"]
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.682490 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zp6xw"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.685178 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.688744 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.688905 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.700425 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k"]
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.706316 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e13bbd49-3f1c-4235-988b-001247a4f125-frr-startup\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.706644 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcwx8\" (UniqueName: \"kubernetes.io/projected/e13bbd49-3f1c-4235-988b-001247a4f125-kube-api-access-qcwx8\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.706820 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e13bbd49-3f1c-4235-988b-001247a4f125-frr-conf\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.706938 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e13bbd49-3f1c-4235-988b-001247a4f125-metrics\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.717554 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-x2p5k\" (UID: \"67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.717861 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e13bbd49-3f1c-4235-988b-001247a4f125-metrics-certs\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.717988 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e13bbd49-3f1c-4235-988b-001247a4f125-reloader\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.718091 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lpvf\" (UniqueName: \"kubernetes.io/projected/67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4-kube-api-access-4lpvf\") pod \"frr-k8s-webhook-server-7df86c4f6c-x2p5k\" (UID: \"67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.718239 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e13bbd49-3f1c-4235-988b-001247a4f125-frr-sockets\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.781817 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-m8vmm"]
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.783174 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.794716 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.794718 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.794861 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.796144 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-h22n6"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.812878 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-mpn6c"]
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.813704 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-mpn6c"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.815681 4660 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.819160 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e13bbd49-3f1c-4235-988b-001247a4f125-frr-conf\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.819194 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e13bbd49-3f1c-4235-988b-001247a4f125-metrics\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.819215 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-x2p5k\" (UID: \"67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.819235 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e13bbd49-3f1c-4235-988b-001247a4f125-metrics-certs\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.819257 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e13bbd49-3f1c-4235-988b-001247a4f125-reloader\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.819276 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/325bb691-ed31-439a-8a6c-b244152fce18-metrics-certs\") pod \"speaker-m8vmm\" (UID: \"325bb691-ed31-439a-8a6c-b244152fce18\") " pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.819291 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkq6n\" (UniqueName: \"kubernetes.io/projected/325bb691-ed31-439a-8a6c-b244152fce18-kube-api-access-zkq6n\") pod \"speaker-m8vmm\" (UID: \"325bb691-ed31-439a-8a6c-b244152fce18\") " pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.819310 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lpvf\" (UniqueName: \"kubernetes.io/projected/67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4-kube-api-access-4lpvf\") pod \"frr-k8s-webhook-server-7df86c4f6c-x2p5k\" (UID: \"67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.819341 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e13bbd49-3f1c-4235-988b-001247a4f125-frr-sockets\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.819362 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e13bbd49-3f1c-4235-988b-001247a4f125-frr-startup\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.819386 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/325bb691-ed31-439a-8a6c-b244152fce18-memberlist\") pod \"speaker-m8vmm\" (UID: \"325bb691-ed31-439a-8a6c-b244152fce18\") " pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.819401 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcwx8\" (UniqueName: \"kubernetes.io/projected/e13bbd49-3f1c-4235-988b-001247a4f125-kube-api-access-qcwx8\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.819421 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/325bb691-ed31-439a-8a6c-b244152fce18-metallb-excludel2\") pod \"speaker-m8vmm\" (UID: \"325bb691-ed31-439a-8a6c-b244152fce18\") " pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.819861 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e13bbd49-3f1c-4235-988b-001247a4f125-frr-conf\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: E0129 12:17:50.822122 4660 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Jan 29 12:17:50 crc kubenswrapper[4660]: E0129 12:17:50.822230 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e13bbd49-3f1c-4235-988b-001247a4f125-metrics-certs podName:e13bbd49-3f1c-4235-988b-001247a4f125 nodeName:}" failed. No retries permitted until 2026-01-29 12:17:51.322194287 +0000 UTC m=+708.545136419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e13bbd49-3f1c-4235-988b-001247a4f125-metrics-certs") pod "frr-k8s-z2bhd" (UID: "e13bbd49-3f1c-4235-988b-001247a4f125") : secret "frr-k8s-certs-secret" not found
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.825005 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e13bbd49-3f1c-4235-988b-001247a4f125-reloader\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.825220 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e13bbd49-3f1c-4235-988b-001247a4f125-metrics\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: E0129 12:17:50.825309 4660 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Jan 29 12:17:50 crc kubenswrapper[4660]: E0129 12:17:50.825360 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4-cert podName:67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4 nodeName:}" failed. No retries permitted until 2026-01-29 12:17:51.325344076 +0000 UTC m=+708.548286208 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4-cert") pod "frr-k8s-webhook-server-7df86c4f6c-x2p5k" (UID: "67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4") : secret "frr-k8s-webhook-server-cert" not found
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.826048 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e13bbd49-3f1c-4235-988b-001247a4f125-frr-sockets\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.826377 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e13bbd49-3f1c-4235-988b-001247a4f125-frr-startup\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.858785 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-mpn6c"]
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.865743 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lpvf\" (UniqueName: \"kubernetes.io/projected/67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4-kube-api-access-4lpvf\") pod \"frr-k8s-webhook-server-7df86c4f6c-x2p5k\" (UID: \"67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.888805 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcwx8\" (UniqueName: \"kubernetes.io/projected/e13bbd49-3f1c-4235-988b-001247a4f125-kube-api-access-qcwx8\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.921067 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/325bb691-ed31-439a-8a6c-b244152fce18-metallb-excludel2\") pod \"speaker-m8vmm\" (UID: \"325bb691-ed31-439a-8a6c-b244152fce18\") " pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.921160 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d971de7-678b-494a-b438-20dfd769dec8-metrics-certs\") pod \"controller-6968d8fdc4-mpn6c\" (UID: \"2d971de7-678b-494a-b438-20dfd769dec8\") " pod="metallb-system/controller-6968d8fdc4-mpn6c"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.921187 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg759\" (UniqueName: \"kubernetes.io/projected/2d971de7-678b-494a-b438-20dfd769dec8-kube-api-access-gg759\") pod \"controller-6968d8fdc4-mpn6c\" (UID: \"2d971de7-678b-494a-b438-20dfd769dec8\") " pod="metallb-system/controller-6968d8fdc4-mpn6c"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.921219 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/325bb691-ed31-439a-8a6c-b244152fce18-metrics-certs\") pod \"speaker-m8vmm\" (UID: \"325bb691-ed31-439a-8a6c-b244152fce18\") " pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.921238 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkq6n\" (UniqueName: \"kubernetes.io/projected/325bb691-ed31-439a-8a6c-b244152fce18-kube-api-access-zkq6n\") pod \"speaker-m8vmm\" (UID: \"325bb691-ed31-439a-8a6c-b244152fce18\") " pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.921258 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d971de7-678b-494a-b438-20dfd769dec8-cert\") pod \"controller-6968d8fdc4-mpn6c\" (UID: \"2d971de7-678b-494a-b438-20dfd769dec8\") " pod="metallb-system/controller-6968d8fdc4-mpn6c"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.921315 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/325bb691-ed31-439a-8a6c-b244152fce18-memberlist\") pod \"speaker-m8vmm\" (UID: \"325bb691-ed31-439a-8a6c-b244152fce18\") " pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:50 crc kubenswrapper[4660]: E0129 12:17:50.921436 4660 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 29 12:17:50 crc kubenswrapper[4660]: E0129 12:17:50.921494 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/325bb691-ed31-439a-8a6c-b244152fce18-memberlist podName:325bb691-ed31-439a-8a6c-b244152fce18 nodeName:}" failed. No retries permitted until 2026-01-29 12:17:51.421475242 +0000 UTC m=+708.644417374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/325bb691-ed31-439a-8a6c-b244152fce18-memberlist") pod "speaker-m8vmm" (UID: "325bb691-ed31-439a-8a6c-b244152fce18") : secret "metallb-memberlist" not found
Jan 29 12:17:50 crc kubenswrapper[4660]: E0129 12:17:50.921777 4660 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Jan 29 12:17:50 crc kubenswrapper[4660]: E0129 12:17:50.921902 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/325bb691-ed31-439a-8a6c-b244152fce18-metrics-certs podName:325bb691-ed31-439a-8a6c-b244152fce18 nodeName:}" failed. No retries permitted until 2026-01-29 12:17:51.421890043 +0000 UTC m=+708.644832175 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/325bb691-ed31-439a-8a6c-b244152fce18-metrics-certs") pod "speaker-m8vmm" (UID: "325bb691-ed31-439a-8a6c-b244152fce18") : secret "speaker-certs-secret" not found
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.922005 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/325bb691-ed31-439a-8a6c-b244152fce18-metallb-excludel2\") pod \"speaker-m8vmm\" (UID: \"325bb691-ed31-439a-8a6c-b244152fce18\") " pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:50 crc kubenswrapper[4660]: I0129 12:17:50.944232 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkq6n\" (UniqueName: \"kubernetes.io/projected/325bb691-ed31-439a-8a6c-b244152fce18-kube-api-access-zkq6n\") pod \"speaker-m8vmm\" (UID: \"325bb691-ed31-439a-8a6c-b244152fce18\") " pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.022944 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d971de7-678b-494a-b438-20dfd769dec8-metrics-certs\") pod \"controller-6968d8fdc4-mpn6c\" (UID: \"2d971de7-678b-494a-b438-20dfd769dec8\") " pod="metallb-system/controller-6968d8fdc4-mpn6c"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.022992 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg759\" (UniqueName: \"kubernetes.io/projected/2d971de7-678b-494a-b438-20dfd769dec8-kube-api-access-gg759\") pod \"controller-6968d8fdc4-mpn6c\" (UID: \"2d971de7-678b-494a-b438-20dfd769dec8\") " pod="metallb-system/controller-6968d8fdc4-mpn6c"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.023034 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d971de7-678b-494a-b438-20dfd769dec8-cert\") pod \"controller-6968d8fdc4-mpn6c\" (UID: \"2d971de7-678b-494a-b438-20dfd769dec8\") " pod="metallb-system/controller-6968d8fdc4-mpn6c"
Jan 29 12:17:51 crc kubenswrapper[4660]: E0129 12:17:51.023110 4660 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Jan 29 12:17:51 crc kubenswrapper[4660]: E0129 12:17:51.023198 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d971de7-678b-494a-b438-20dfd769dec8-metrics-certs podName:2d971de7-678b-494a-b438-20dfd769dec8 nodeName:}" failed. No retries permitted until 2026-01-29 12:17:51.523174585 +0000 UTC m=+708.746116787 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d971de7-678b-494a-b438-20dfd769dec8-metrics-certs") pod "controller-6968d8fdc4-mpn6c" (UID: "2d971de7-678b-494a-b438-20dfd769dec8") : secret "controller-certs-secret" not found
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.026169 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d971de7-678b-494a-b438-20dfd769dec8-cert\") pod \"controller-6968d8fdc4-mpn6c\" (UID: \"2d971de7-678b-494a-b438-20dfd769dec8\") " pod="metallb-system/controller-6968d8fdc4-mpn6c"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.039719 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg759\" (UniqueName: \"kubernetes.io/projected/2d971de7-678b-494a-b438-20dfd769dec8-kube-api-access-gg759\") pod \"controller-6968d8fdc4-mpn6c\" (UID: \"2d971de7-678b-494a-b438-20dfd769dec8\") " pod="metallb-system/controller-6968d8fdc4-mpn6c"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.327211 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-x2p5k\" (UID: \"67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.327255 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e13bbd49-3f1c-4235-988b-001247a4f125-metrics-certs\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.330163 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e13bbd49-3f1c-4235-988b-001247a4f125-metrics-certs\") pod \"frr-k8s-z2bhd\" (UID: \"e13bbd49-3f1c-4235-988b-001247a4f125\") " pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.330327 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-x2p5k\" (UID: \"67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.427974 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/325bb691-ed31-439a-8a6c-b244152fce18-metrics-certs\") pod \"speaker-m8vmm\" (UID: \"325bb691-ed31-439a-8a6c-b244152fce18\") " pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.428047 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/325bb691-ed31-439a-8a6c-b244152fce18-memberlist\") pod \"speaker-m8vmm\" (UID: \"325bb691-ed31-439a-8a6c-b244152fce18\") " pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:51 crc kubenswrapper[4660]: E0129 12:17:51.428184 4660 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 29 12:17:51 crc kubenswrapper[4660]: E0129 12:17:51.428250 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/325bb691-ed31-439a-8a6c-b244152fce18-memberlist podName:325bb691-ed31-439a-8a6c-b244152fce18 nodeName:}" failed. No retries permitted until 2026-01-29 12:17:52.42823014 +0000 UTC m=+709.651172272 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/325bb691-ed31-439a-8a6c-b244152fce18-memberlist") pod "speaker-m8vmm" (UID: "325bb691-ed31-439a-8a6c-b244152fce18") : secret "metallb-memberlist" not found
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.432001 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/325bb691-ed31-439a-8a6c-b244152fce18-metrics-certs\") pod \"speaker-m8vmm\" (UID: \"325bb691-ed31-439a-8a6c-b244152fce18\") " pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.529180 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d971de7-678b-494a-b438-20dfd769dec8-metrics-certs\") pod \"controller-6968d8fdc4-mpn6c\" (UID: \"2d971de7-678b-494a-b438-20dfd769dec8\") " pod="metallb-system/controller-6968d8fdc4-mpn6c"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.532230 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d971de7-678b-494a-b438-20dfd769dec8-metrics-certs\") pod \"controller-6968d8fdc4-mpn6c\" (UID: \"2d971de7-678b-494a-b438-20dfd769dec8\") " pod="metallb-system/controller-6968d8fdc4-mpn6c"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.594281 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.607620 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z2bhd"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.757407 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-mpn6c"
Jan 29 12:17:51 crc kubenswrapper[4660]: I0129 12:17:51.974168 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-mpn6c"]
Jan 29 12:17:52 crc kubenswrapper[4660]: I0129 12:17:52.053093 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k"]
Jan 29 12:17:52 crc kubenswrapper[4660]: W0129 12:17:52.054404 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67ec6fe2_fa0d_4c21_a12b_79e2a2e4d9b4.slice/crio-f451de69eb41defe752e8c570d744ef958a3b0bd1ebd505dc921dbf41e9c3ea3 WatchSource:0}: Error finding container f451de69eb41defe752e8c570d744ef958a3b0bd1ebd505dc921dbf41e9c3ea3: Status 404 returned error can't find the container with id f451de69eb41defe752e8c570d744ef958a3b0bd1ebd505dc921dbf41e9c3ea3
Jan 29 12:17:52 crc kubenswrapper[4660]: I0129 12:17:52.397298 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-mpn6c" event={"ID":"2d971de7-678b-494a-b438-20dfd769dec8","Type":"ContainerStarted","Data":"fd508c2a37f8df239c6917b2750c8262c86c3307f13a100cda86540faa9e68ec"}
Jan 29 12:17:52 crc kubenswrapper[4660]: I0129 12:17:52.397350 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-mpn6c" event={"ID":"2d971de7-678b-494a-b438-20dfd769dec8","Type":"ContainerStarted","Data":"eaf3710bfef5b0572b2611937c651e60abd2e399920df7c549a2ffb3d111c98f"}
Jan 29 12:17:52 crc kubenswrapper[4660]: I0129 12:17:52.397362 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-mpn6c" event={"ID":"2d971de7-678b-494a-b438-20dfd769dec8","Type":"ContainerStarted","Data":"5e436a5eb714b703a0ef81e8099bf13f74ba688a63b1472f47af551807025a87"}
Jan 29 12:17:52 crc kubenswrapper[4660]: I0129 12:17:52.398450 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k" event={"ID":"67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4","Type":"ContainerStarted","Data":"f451de69eb41defe752e8c570d744ef958a3b0bd1ebd505dc921dbf41e9c3ea3"}
Jan 29 12:17:52 crc kubenswrapper[4660]: I0129 12:17:52.399474 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z2bhd" event={"ID":"e13bbd49-3f1c-4235-988b-001247a4f125","Type":"ContainerStarted","Data":"24d03cf933378cefdc2fbda353b43f02f97216506894ad7f7449d44899d3d0d3"}
Jan 29 12:17:52 crc kubenswrapper[4660]: I0129 12:17:52.420366 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-mpn6c" podStartSLOduration=2.420349229 podStartE2EDuration="2.420349229s" podCreationTimestamp="2026-01-29 12:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:17:52.413385852 +0000 UTC m=+709.636327984" watchObservedRunningTime="2026-01-29 12:17:52.420349229 +0000 UTC m=+709.643291361"
Jan 29 12:17:52 crc kubenswrapper[4660]: I0129 12:17:52.443892 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/325bb691-ed31-439a-8a6c-b244152fce18-memberlist\") pod \"speaker-m8vmm\" (UID: \"325bb691-ed31-439a-8a6c-b244152fce18\") " pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:52 crc kubenswrapper[4660]: I0129 12:17:52.454948 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/325bb691-ed31-439a-8a6c-b244152fce18-memberlist\") pod \"speaker-m8vmm\" (UID: \"325bb691-ed31-439a-8a6c-b244152fce18\") " pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:52 crc kubenswrapper[4660]: I0129 12:17:52.595636 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:52 crc kubenswrapper[4660]: W0129 12:17:52.610224 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod325bb691_ed31_439a_8a6c_b244152fce18.slice/crio-4dd5a0a95c6a524d54fca6a6774728233f8cc1136f12d08d1b9a9edb938cec14 WatchSource:0}: Error finding container 4dd5a0a95c6a524d54fca6a6774728233f8cc1136f12d08d1b9a9edb938cec14: Status 404 returned error can't find the container with id 4dd5a0a95c6a524d54fca6a6774728233f8cc1136f12d08d1b9a9edb938cec14
Jan 29 12:17:53 crc kubenswrapper[4660]: I0129 12:17:53.406932 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m8vmm" event={"ID":"325bb691-ed31-439a-8a6c-b244152fce18","Type":"ContainerStarted","Data":"e96f0c044c6306cd7471fc60d62dfe37e031e0828d35252c77ef42493648cdfe"}
Jan 29 12:17:53 crc kubenswrapper[4660]: I0129 12:17:53.406970 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m8vmm" event={"ID":"325bb691-ed31-439a-8a6c-b244152fce18","Type":"ContainerStarted","Data":"e84a5336a3b1a7daccc56c5c9cf8185a9a0e1874d90eaa57e43fd72a5cd7801e"}
Jan 29 12:17:53 crc kubenswrapper[4660]: I0129 12:17:53.406979 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m8vmm" event={"ID":"325bb691-ed31-439a-8a6c-b244152fce18","Type":"ContainerStarted","Data":"4dd5a0a95c6a524d54fca6a6774728233f8cc1136f12d08d1b9a9edb938cec14"}
Jan 29 12:17:53 crc kubenswrapper[4660]: I0129 12:17:53.407104 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-mpn6c"
Jan 29 12:17:53 crc kubenswrapper[4660]: I0129 12:17:53.407155 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-m8vmm"
Jan 29 12:17:53 crc kubenswrapper[4660]: I0129 12:17:53.430142 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-m8vmm" podStartSLOduration=3.4301233 podStartE2EDuration="3.4301233s" podCreationTimestamp="2026-01-29 12:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:17:53.425625773 +0000 UTC m=+710.648567915" watchObservedRunningTime="2026-01-29 12:17:53.4301233 +0000 UTC m=+710.653065432"
Jan 29 12:18:00 crc kubenswrapper[4660]: I0129 12:18:00.457302 4660 generic.go:334] "Generic (PLEG): container finished" podID="e13bbd49-3f1c-4235-988b-001247a4f125" containerID="16cb15cdaafe79855b3af68392bd3e788631b25f53ec0fa1ca7b42e2771cc52b" exitCode=0
Jan 29 12:18:00 crc kubenswrapper[4660]: I0129 12:18:00.457336 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z2bhd" event={"ID":"e13bbd49-3f1c-4235-988b-001247a4f125","Type":"ContainerDied","Data":"16cb15cdaafe79855b3af68392bd3e788631b25f53ec0fa1ca7b42e2771cc52b"}
Jan 29 12:18:00 crc kubenswrapper[4660]: I0129 12:18:00.459363 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k" event={"ID":"67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4","Type":"ContainerStarted","Data":"f60382cd6af3f9e2cfb855a5cf2e4bad23c393693d83eb73f94bc2ededb657a9"}
Jan 29 12:18:00 crc kubenswrapper[4660]: I0129 12:18:00.459453 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k"
Jan 29 12:18:00 crc kubenswrapper[4660]: I0129 12:18:00.522210 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k" podStartSLOduration=2.7279207420000002 podStartE2EDuration="10.522191663s" podCreationTimestamp="2026-01-29 12:17:50 +0000 UTC" firstStartedPulling="2026-01-29 12:17:52.056731329 +0000 UTC m=+709.279673461" lastFinishedPulling="2026-01-29 12:17:59.85100225 +0000 UTC m=+717.073944382" observedRunningTime="2026-01-29 12:18:00.518036908 +0000 UTC m=+717.740979060" watchObservedRunningTime="2026-01-29 12:18:00.522191663 +0000 UTC m=+717.745133805"
Jan 29 12:18:01 crc kubenswrapper[4660]: I0129 12:18:01.465521 4660 generic.go:334] "Generic (PLEG): container finished" podID="e13bbd49-3f1c-4235-988b-001247a4f125" containerID="b61cd6becf93e15d677095b18f747950a5825307a610fd2fe1f529f64a446d46" exitCode=0
Jan 29 12:18:01 crc kubenswrapper[4660]: I0129 12:18:01.465611 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z2bhd" event={"ID":"e13bbd49-3f1c-4235-988b-001247a4f125","Type":"ContainerDied","Data":"b61cd6becf93e15d677095b18f747950a5825307a610fd2fe1f529f64a446d46"}
Jan 29 12:18:02 crc kubenswrapper[4660]: I0129 12:18:02.473528 4660 generic.go:334] "Generic (PLEG): container finished" podID="e13bbd49-3f1c-4235-988b-001247a4f125" containerID="5ff95f5519602e4abdfd7198a9bd96cec77b1a1a8adfbda745d4571c5c800848" exitCode=0
Jan 29 12:18:02 crc kubenswrapper[4660]: I0129 12:18:02.473815 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z2bhd" event={"ID":"e13bbd49-3f1c-4235-988b-001247a4f125","Type":"ContainerDied","Data":"5ff95f5519602e4abdfd7198a9bd96cec77b1a1a8adfbda745d4571c5c800848"}
Jan 29 12:18:02 crc kubenswrapper[4660]: I0129 12:18:02.604742 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-m8vmm"
Jan 29 12:18:03 crc kubenswrapper[4660]: I0129 12:18:03.488003 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z2bhd" event={"ID":"e13bbd49-3f1c-4235-988b-001247a4f125","Type":"ContainerStarted","Data":"fab1914d067c03e1f55e1effb96f86df6779ab34d4b185cd62f931c2910ed7a7"}
Jan 29 12:18:03 crc kubenswrapper[4660]: I0129 12:18:03.488284 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z2bhd" event={"ID":"e13bbd49-3f1c-4235-988b-001247a4f125","Type":"ContainerStarted","Data":"687c2a5886b13439d63501795fe447ebc5b10bcfb24387d0f399311728547000"}
Jan 29 12:18:03 crc kubenswrapper[4660]: I0129 12:18:03.488305 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z2bhd" event={"ID":"e13bbd49-3f1c-4235-988b-001247a4f125","Type":"ContainerStarted","Data":"db0ad27957c19e27dbff721a687b757e6a08f25bb764662a27a3606afe18380f"}
Jan 29 12:18:03 crc kubenswrapper[4660]: I0129 12:18:03.488314 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z2bhd" event={"ID":"e13bbd49-3f1c-4235-988b-001247a4f125","Type":"ContainerStarted","Data":"b634f7b790648593b9826c5e7a4bb599650acdca8b4889d9654b8788ae142ef7"}
Jan 29 12:18:03 crc kubenswrapper[4660]: I0129 12:18:03.488324 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z2bhd" event={"ID":"e13bbd49-3f1c-4235-988b-001247a4f125","Type":"ContainerStarted","Data":"8d195fa9965a2e31f2338543ee11e8373b697fa28bfbf77e142d27d18a4d1af9"}
Jan 29 12:18:04 crc kubenswrapper[4660]: I0129 12:18:04.499676 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z2bhd" event={"ID":"e13bbd49-3f1c-4235-988b-001247a4f125","Type":"ContainerStarted","Data":"ede45bb309ed4cdbbe5fb94e2da8d2b931ea4d64ca8622bbfea6f3415a41f63c"}
Jan 29 12:18:04 crc kubenswrapper[4660]: I0129 12:18:04.500160 4660
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-z2bhd" Jan 29 12:18:04 crc kubenswrapper[4660]: I0129 12:18:04.527678 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-z2bhd" podStartSLOduration=6.490682809 podStartE2EDuration="14.527655939s" podCreationTimestamp="2026-01-29 12:17:50 +0000 UTC" firstStartedPulling="2026-01-29 12:17:51.781146396 +0000 UTC m=+709.004088528" lastFinishedPulling="2026-01-29 12:17:59.818119526 +0000 UTC m=+717.041061658" observedRunningTime="2026-01-29 12:18:04.524160192 +0000 UTC m=+721.747102344" watchObservedRunningTime="2026-01-29 12:18:04.527655939 +0000 UTC m=+721.750598071" Jan 29 12:18:05 crc kubenswrapper[4660]: I0129 12:18:05.438904 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mf4nl"] Jan 29 12:18:05 crc kubenswrapper[4660]: I0129 12:18:05.439770 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mf4nl" Jan 29 12:18:05 crc kubenswrapper[4660]: I0129 12:18:05.442818 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wnfs8" Jan 29 12:18:05 crc kubenswrapper[4660]: I0129 12:18:05.443410 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 12:18:05 crc kubenswrapper[4660]: I0129 12:18:05.444746 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 12:18:05 crc kubenswrapper[4660]: I0129 12:18:05.465131 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mf4nl"] Jan 29 12:18:05 crc kubenswrapper[4660]: I0129 12:18:05.616421 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dwfs\" 
(UniqueName: \"kubernetes.io/projected/534c1dd0-efe9-4f5e-8132-a907f7b2bd0b-kube-api-access-8dwfs\") pod \"openstack-operator-index-mf4nl\" (UID: \"534c1dd0-efe9-4f5e-8132-a907f7b2bd0b\") " pod="openstack-operators/openstack-operator-index-mf4nl" Jan 29 12:18:05 crc kubenswrapper[4660]: I0129 12:18:05.717502 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dwfs\" (UniqueName: \"kubernetes.io/projected/534c1dd0-efe9-4f5e-8132-a907f7b2bd0b-kube-api-access-8dwfs\") pod \"openstack-operator-index-mf4nl\" (UID: \"534c1dd0-efe9-4f5e-8132-a907f7b2bd0b\") " pod="openstack-operators/openstack-operator-index-mf4nl" Jan 29 12:18:05 crc kubenswrapper[4660]: I0129 12:18:05.738081 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dwfs\" (UniqueName: \"kubernetes.io/projected/534c1dd0-efe9-4f5e-8132-a907f7b2bd0b-kube-api-access-8dwfs\") pod \"openstack-operator-index-mf4nl\" (UID: \"534c1dd0-efe9-4f5e-8132-a907f7b2bd0b\") " pod="openstack-operators/openstack-operator-index-mf4nl" Jan 29 12:18:05 crc kubenswrapper[4660]: I0129 12:18:05.756591 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mf4nl" Jan 29 12:18:06 crc kubenswrapper[4660]: I0129 12:18:06.212983 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mf4nl"] Jan 29 12:18:06 crc kubenswrapper[4660]: I0129 12:18:06.512928 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mf4nl" event={"ID":"534c1dd0-efe9-4f5e-8132-a907f7b2bd0b","Type":"ContainerStarted","Data":"525f4fdefa6696d88b6b5230986e37e449ce078d067edf651286531ce94566e1"} Jan 29 12:18:06 crc kubenswrapper[4660]: I0129 12:18:06.608466 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-z2bhd" Jan 29 12:18:06 crc kubenswrapper[4660]: I0129 12:18:06.665440 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-z2bhd" Jan 29 12:18:08 crc kubenswrapper[4660]: I0129 12:18:08.219412 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mf4nl"] Jan 29 12:18:08 crc kubenswrapper[4660]: I0129 12:18:08.834945 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-jh4bg"] Jan 29 12:18:08 crc kubenswrapper[4660]: I0129 12:18:08.835630 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jh4bg" Jan 29 12:18:08 crc kubenswrapper[4660]: I0129 12:18:08.851710 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jh4bg"] Jan 29 12:18:08 crc kubenswrapper[4660]: I0129 12:18:08.865627 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbtrr\" (UniqueName: \"kubernetes.io/projected/90b0871c-024f-4dbf-8741-a22dd98b1a5c-kube-api-access-cbtrr\") pod \"openstack-operator-index-jh4bg\" (UID: \"90b0871c-024f-4dbf-8741-a22dd98b1a5c\") " pod="openstack-operators/openstack-operator-index-jh4bg" Jan 29 12:18:08 crc kubenswrapper[4660]: I0129 12:18:08.969566 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbtrr\" (UniqueName: \"kubernetes.io/projected/90b0871c-024f-4dbf-8741-a22dd98b1a5c-kube-api-access-cbtrr\") pod \"openstack-operator-index-jh4bg\" (UID: \"90b0871c-024f-4dbf-8741-a22dd98b1a5c\") " pod="openstack-operators/openstack-operator-index-jh4bg" Jan 29 12:18:08 crc kubenswrapper[4660]: I0129 12:18:08.989270 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbtrr\" (UniqueName: \"kubernetes.io/projected/90b0871c-024f-4dbf-8741-a22dd98b1a5c-kube-api-access-cbtrr\") pod \"openstack-operator-index-jh4bg\" (UID: \"90b0871c-024f-4dbf-8741-a22dd98b1a5c\") " pod="openstack-operators/openstack-operator-index-jh4bg" Jan 29 12:18:09 crc kubenswrapper[4660]: I0129 12:18:09.167118 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jh4bg" Jan 29 12:18:09 crc kubenswrapper[4660]: W0129 12:18:09.851711 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90b0871c_024f_4dbf_8741_a22dd98b1a5c.slice/crio-b104ab14493789b4a9b2c8516ca46f3f3a9ec7d6a915dc2d452f647eee0db6f4 WatchSource:0}: Error finding container b104ab14493789b4a9b2c8516ca46f3f3a9ec7d6a915dc2d452f647eee0db6f4: Status 404 returned error can't find the container with id b104ab14493789b4a9b2c8516ca46f3f3a9ec7d6a915dc2d452f647eee0db6f4 Jan 29 12:18:09 crc kubenswrapper[4660]: I0129 12:18:09.852493 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jh4bg"] Jan 29 12:18:10 crc kubenswrapper[4660]: I0129 12:18:10.561009 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mf4nl" event={"ID":"534c1dd0-efe9-4f5e-8132-a907f7b2bd0b","Type":"ContainerStarted","Data":"d789a487a838b0935d6f4c5c0edfcaf7d68fbf19e88c8cd57f8444946fad1a85"} Jan 29 12:18:10 crc kubenswrapper[4660]: I0129 12:18:10.561152 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-mf4nl" podUID="534c1dd0-efe9-4f5e-8132-a907f7b2bd0b" containerName="registry-server" containerID="cri-o://d789a487a838b0935d6f4c5c0edfcaf7d68fbf19e88c8cd57f8444946fad1a85" gracePeriod=2 Jan 29 12:18:10 crc kubenswrapper[4660]: I0129 12:18:10.564489 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jh4bg" event={"ID":"90b0871c-024f-4dbf-8741-a22dd98b1a5c","Type":"ContainerStarted","Data":"8f37860a6aa2eda957b625cf71d5fdff3dde0d68c3abd6a846f91f602e59118e"} Jan 29 12:18:10 crc kubenswrapper[4660]: I0129 12:18:10.564537 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jh4bg" 
event={"ID":"90b0871c-024f-4dbf-8741-a22dd98b1a5c","Type":"ContainerStarted","Data":"b104ab14493789b4a9b2c8516ca46f3f3a9ec7d6a915dc2d452f647eee0db6f4"} Jan 29 12:18:10 crc kubenswrapper[4660]: I0129 12:18:10.586137 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mf4nl" podStartSLOduration=2.081345358 podStartE2EDuration="5.586119698s" podCreationTimestamp="2026-01-29 12:18:05 +0000 UTC" firstStartedPulling="2026-01-29 12:18:06.230248005 +0000 UTC m=+723.453190137" lastFinishedPulling="2026-01-29 12:18:09.735022345 +0000 UTC m=+726.957964477" observedRunningTime="2026-01-29 12:18:10.579823423 +0000 UTC m=+727.802765545" watchObservedRunningTime="2026-01-29 12:18:10.586119698 +0000 UTC m=+727.809061830" Jan 29 12:18:10 crc kubenswrapper[4660]: I0129 12:18:10.611707 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-jh4bg" podStartSLOduration=2.563246772 podStartE2EDuration="2.611672088s" podCreationTimestamp="2026-01-29 12:18:08 +0000 UTC" firstStartedPulling="2026-01-29 12:18:09.856191792 +0000 UTC m=+727.079133924" lastFinishedPulling="2026-01-29 12:18:09.904617118 +0000 UTC m=+727.127559240" observedRunningTime="2026-01-29 12:18:10.607927064 +0000 UTC m=+727.830869266" watchObservedRunningTime="2026-01-29 12:18:10.611672088 +0000 UTC m=+727.834614220" Jan 29 12:18:10 crc kubenswrapper[4660]: I0129 12:18:10.893268 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mf4nl" Jan 29 12:18:11 crc kubenswrapper[4660]: I0129 12:18:11.052910 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dwfs\" (UniqueName: \"kubernetes.io/projected/534c1dd0-efe9-4f5e-8132-a907f7b2bd0b-kube-api-access-8dwfs\") pod \"534c1dd0-efe9-4f5e-8132-a907f7b2bd0b\" (UID: \"534c1dd0-efe9-4f5e-8132-a907f7b2bd0b\") " Jan 29 12:18:11 crc kubenswrapper[4660]: I0129 12:18:11.057968 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/534c1dd0-efe9-4f5e-8132-a907f7b2bd0b-kube-api-access-8dwfs" (OuterVolumeSpecName: "kube-api-access-8dwfs") pod "534c1dd0-efe9-4f5e-8132-a907f7b2bd0b" (UID: "534c1dd0-efe9-4f5e-8132-a907f7b2bd0b"). InnerVolumeSpecName "kube-api-access-8dwfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:18:11 crc kubenswrapper[4660]: I0129 12:18:11.154074 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dwfs\" (UniqueName: \"kubernetes.io/projected/534c1dd0-efe9-4f5e-8132-a907f7b2bd0b-kube-api-access-8dwfs\") on node \"crc\" DevicePath \"\"" Jan 29 12:18:11 crc kubenswrapper[4660]: I0129 12:18:11.571531 4660 generic.go:334] "Generic (PLEG): container finished" podID="534c1dd0-efe9-4f5e-8132-a907f7b2bd0b" containerID="d789a487a838b0935d6f4c5c0edfcaf7d68fbf19e88c8cd57f8444946fad1a85" exitCode=0 Jan 29 12:18:11 crc kubenswrapper[4660]: I0129 12:18:11.571822 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mf4nl" Jan 29 12:18:11 crc kubenswrapper[4660]: I0129 12:18:11.572180 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mf4nl" event={"ID":"534c1dd0-efe9-4f5e-8132-a907f7b2bd0b","Type":"ContainerDied","Data":"d789a487a838b0935d6f4c5c0edfcaf7d68fbf19e88c8cd57f8444946fad1a85"} Jan 29 12:18:11 crc kubenswrapper[4660]: I0129 12:18:11.572221 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mf4nl" event={"ID":"534c1dd0-efe9-4f5e-8132-a907f7b2bd0b","Type":"ContainerDied","Data":"525f4fdefa6696d88b6b5230986e37e449ce078d067edf651286531ce94566e1"} Jan 29 12:18:11 crc kubenswrapper[4660]: I0129 12:18:11.572244 4660 scope.go:117] "RemoveContainer" containerID="d789a487a838b0935d6f4c5c0edfcaf7d68fbf19e88c8cd57f8444946fad1a85" Jan 29 12:18:11 crc kubenswrapper[4660]: I0129 12:18:11.591963 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mf4nl"] Jan 29 12:18:11 crc kubenswrapper[4660]: I0129 12:18:11.593444 4660 scope.go:117] "RemoveContainer" containerID="d789a487a838b0935d6f4c5c0edfcaf7d68fbf19e88c8cd57f8444946fad1a85" Jan 29 12:18:11 crc kubenswrapper[4660]: E0129 12:18:11.593831 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d789a487a838b0935d6f4c5c0edfcaf7d68fbf19e88c8cd57f8444946fad1a85\": container with ID starting with d789a487a838b0935d6f4c5c0edfcaf7d68fbf19e88c8cd57f8444946fad1a85 not found: ID does not exist" containerID="d789a487a838b0935d6f4c5c0edfcaf7d68fbf19e88c8cd57f8444946fad1a85" Jan 29 12:18:11 crc kubenswrapper[4660]: I0129 12:18:11.593861 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d789a487a838b0935d6f4c5c0edfcaf7d68fbf19e88c8cd57f8444946fad1a85"} err="failed to get container status 
\"d789a487a838b0935d6f4c5c0edfcaf7d68fbf19e88c8cd57f8444946fad1a85\": rpc error: code = NotFound desc = could not find container \"d789a487a838b0935d6f4c5c0edfcaf7d68fbf19e88c8cd57f8444946fad1a85\": container with ID starting with d789a487a838b0935d6f4c5c0edfcaf7d68fbf19e88c8cd57f8444946fad1a85 not found: ID does not exist" Jan 29 12:18:11 crc kubenswrapper[4660]: I0129 12:18:11.599126 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-mf4nl"] Jan 29 12:18:11 crc kubenswrapper[4660]: I0129 12:18:11.603428 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-x2p5k" Jan 29 12:18:11 crc kubenswrapper[4660]: I0129 12:18:11.762193 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-mpn6c" Jan 29 12:18:13 crc kubenswrapper[4660]: I0129 12:18:13.476594 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="534c1dd0-efe9-4f5e-8132-a907f7b2bd0b" path="/var/lib/kubelet/pods/534c1dd0-efe9-4f5e-8132-a907f7b2bd0b/volumes" Jan 29 12:18:19 crc kubenswrapper[4660]: I0129 12:18:19.167799 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-jh4bg" Jan 29 12:18:19 crc kubenswrapper[4660]: I0129 12:18:19.168137 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-jh4bg" Jan 29 12:18:19 crc kubenswrapper[4660]: I0129 12:18:19.206051 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-jh4bg" Jan 29 12:18:19 crc kubenswrapper[4660]: I0129 12:18:19.649667 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-jh4bg" Jan 29 12:18:21 crc kubenswrapper[4660]: I0129 12:18:21.611718 4660 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-z2bhd" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.172447 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb"] Jan 29 12:18:26 crc kubenswrapper[4660]: E0129 12:18:26.173031 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="534c1dd0-efe9-4f5e-8132-a907f7b2bd0b" containerName="registry-server" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.173046 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="534c1dd0-efe9-4f5e-8132-a907f7b2bd0b" containerName="registry-server" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.173176 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="534c1dd0-efe9-4f5e-8132-a907f7b2bd0b" containerName="registry-server" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.174078 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.178375 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-ffrdb" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.240141 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb"] Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.292286 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.292354 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.292779 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kjf9\" (UniqueName: \"kubernetes.io/projected/8b73c58d-8e49-4f14-98a0-a67114e62ff3-kube-api-access-4kjf9\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb\" (UID: \"8b73c58d-8e49-4f14-98a0-a67114e62ff3\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.292999 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/8b73c58d-8e49-4f14-98a0-a67114e62ff3-util\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb\" (UID: \"8b73c58d-8e49-4f14-98a0-a67114e62ff3\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.293047 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8b73c58d-8e49-4f14-98a0-a67114e62ff3-bundle\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb\" (UID: \"8b73c58d-8e49-4f14-98a0-a67114e62ff3\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.394113 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8b73c58d-8e49-4f14-98a0-a67114e62ff3-util\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb\" (UID: \"8b73c58d-8e49-4f14-98a0-a67114e62ff3\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.394398 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8b73c58d-8e49-4f14-98a0-a67114e62ff3-bundle\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb\" (UID: \"8b73c58d-8e49-4f14-98a0-a67114e62ff3\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.394498 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kjf9\" (UniqueName: \"kubernetes.io/projected/8b73c58d-8e49-4f14-98a0-a67114e62ff3-kube-api-access-4kjf9\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb\" 
(UID: \"8b73c58d-8e49-4f14-98a0-a67114e62ff3\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.394684 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8b73c58d-8e49-4f14-98a0-a67114e62ff3-util\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb\" (UID: \"8b73c58d-8e49-4f14-98a0-a67114e62ff3\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.394828 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8b73c58d-8e49-4f14-98a0-a67114e62ff3-bundle\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb\" (UID: \"8b73c58d-8e49-4f14-98a0-a67114e62ff3\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.417034 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kjf9\" (UniqueName: \"kubernetes.io/projected/8b73c58d-8e49-4f14-98a0-a67114e62ff3-kube-api-access-4kjf9\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb\" (UID: \"8b73c58d-8e49-4f14-98a0-a67114e62ff3\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.494111 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" Jan 29 12:18:26 crc kubenswrapper[4660]: I0129 12:18:26.898078 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb"] Jan 29 12:18:26 crc kubenswrapper[4660]: W0129 12:18:26.899882 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b73c58d_8e49_4f14_98a0_a67114e62ff3.slice/crio-c22027180d850fa9745ed07d3b34372cfbe0ce77c0ed175f297ca5dd105e1f87 WatchSource:0}: Error finding container c22027180d850fa9745ed07d3b34372cfbe0ce77c0ed175f297ca5dd105e1f87: Status 404 returned error can't find the container with id c22027180d850fa9745ed07d3b34372cfbe0ce77c0ed175f297ca5dd105e1f87 Jan 29 12:18:27 crc kubenswrapper[4660]: I0129 12:18:27.683984 4660 generic.go:334] "Generic (PLEG): container finished" podID="8b73c58d-8e49-4f14-98a0-a67114e62ff3" containerID="0d93f861a5c838f906337d2d43dce77b01d7cf1c3e10be87607c247bdaccbf0e" exitCode=0 Jan 29 12:18:27 crc kubenswrapper[4660]: I0129 12:18:27.684098 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" event={"ID":"8b73c58d-8e49-4f14-98a0-a67114e62ff3","Type":"ContainerDied","Data":"0d93f861a5c838f906337d2d43dce77b01d7cf1c3e10be87607c247bdaccbf0e"} Jan 29 12:18:27 crc kubenswrapper[4660]: I0129 12:18:27.684378 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" event={"ID":"8b73c58d-8e49-4f14-98a0-a67114e62ff3","Type":"ContainerStarted","Data":"c22027180d850fa9745ed07d3b34372cfbe0ce77c0ed175f297ca5dd105e1f87"} Jan 29 12:18:28 crc kubenswrapper[4660]: I0129 12:18:28.693715 4660 generic.go:334] "Generic (PLEG): container finished" 
podID="8b73c58d-8e49-4f14-98a0-a67114e62ff3" containerID="cd38e6a655d133604ce2b52e3396b3008d7717324c6160387876dcd5c265e0a1" exitCode=0 Jan 29 12:18:28 crc kubenswrapper[4660]: I0129 12:18:28.693774 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" event={"ID":"8b73c58d-8e49-4f14-98a0-a67114e62ff3","Type":"ContainerDied","Data":"cd38e6a655d133604ce2b52e3396b3008d7717324c6160387876dcd5c265e0a1"} Jan 29 12:18:29 crc kubenswrapper[4660]: I0129 12:18:29.704035 4660 generic.go:334] "Generic (PLEG): container finished" podID="8b73c58d-8e49-4f14-98a0-a67114e62ff3" containerID="4f35eb5c292c38ddf01b1da7692c6f241c10f90f3afa878e4af22255162ffa46" exitCode=0 Jan 29 12:18:29 crc kubenswrapper[4660]: I0129 12:18:29.704376 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" event={"ID":"8b73c58d-8e49-4f14-98a0-a67114e62ff3","Type":"ContainerDied","Data":"4f35eb5c292c38ddf01b1da7692c6f241c10f90f3afa878e4af22255162ffa46"} Jan 29 12:18:30 crc kubenswrapper[4660]: I0129 12:18:30.924237 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" Jan 29 12:18:31 crc kubenswrapper[4660]: I0129 12:18:31.084574 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8b73c58d-8e49-4f14-98a0-a67114e62ff3-util\") pod \"8b73c58d-8e49-4f14-98a0-a67114e62ff3\" (UID: \"8b73c58d-8e49-4f14-98a0-a67114e62ff3\") " Jan 29 12:18:31 crc kubenswrapper[4660]: I0129 12:18:31.084949 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8b73c58d-8e49-4f14-98a0-a67114e62ff3-bundle\") pod \"8b73c58d-8e49-4f14-98a0-a67114e62ff3\" (UID: \"8b73c58d-8e49-4f14-98a0-a67114e62ff3\") " Jan 29 12:18:31 crc kubenswrapper[4660]: I0129 12:18:31.085019 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kjf9\" (UniqueName: \"kubernetes.io/projected/8b73c58d-8e49-4f14-98a0-a67114e62ff3-kube-api-access-4kjf9\") pod \"8b73c58d-8e49-4f14-98a0-a67114e62ff3\" (UID: \"8b73c58d-8e49-4f14-98a0-a67114e62ff3\") " Jan 29 12:18:31 crc kubenswrapper[4660]: I0129 12:18:31.085797 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b73c58d-8e49-4f14-98a0-a67114e62ff3-bundle" (OuterVolumeSpecName: "bundle") pod "8b73c58d-8e49-4f14-98a0-a67114e62ff3" (UID: "8b73c58d-8e49-4f14-98a0-a67114e62ff3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:18:31 crc kubenswrapper[4660]: I0129 12:18:31.095386 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b73c58d-8e49-4f14-98a0-a67114e62ff3-kube-api-access-4kjf9" (OuterVolumeSpecName: "kube-api-access-4kjf9") pod "8b73c58d-8e49-4f14-98a0-a67114e62ff3" (UID: "8b73c58d-8e49-4f14-98a0-a67114e62ff3"). InnerVolumeSpecName "kube-api-access-4kjf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:18:31 crc kubenswrapper[4660]: I0129 12:18:31.099132 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b73c58d-8e49-4f14-98a0-a67114e62ff3-util" (OuterVolumeSpecName: "util") pod "8b73c58d-8e49-4f14-98a0-a67114e62ff3" (UID: "8b73c58d-8e49-4f14-98a0-a67114e62ff3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:18:31 crc kubenswrapper[4660]: I0129 12:18:31.187094 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kjf9\" (UniqueName: \"kubernetes.io/projected/8b73c58d-8e49-4f14-98a0-a67114e62ff3-kube-api-access-4kjf9\") on node \"crc\" DevicePath \"\"" Jan 29 12:18:31 crc kubenswrapper[4660]: I0129 12:18:31.187132 4660 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8b73c58d-8e49-4f14-98a0-a67114e62ff3-util\") on node \"crc\" DevicePath \"\"" Jan 29 12:18:31 crc kubenswrapper[4660]: I0129 12:18:31.187143 4660 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8b73c58d-8e49-4f14-98a0-a67114e62ff3-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:18:31 crc kubenswrapper[4660]: I0129 12:18:31.719183 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" event={"ID":"8b73c58d-8e49-4f14-98a0-a67114e62ff3","Type":"ContainerDied","Data":"c22027180d850fa9745ed07d3b34372cfbe0ce77c0ed175f297ca5dd105e1f87"} Jan 29 12:18:31 crc kubenswrapper[4660]: I0129 12:18:31.719240 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c22027180d850fa9745ed07d3b34372cfbe0ce77c0ed175f297ca5dd105e1f87" Jan 29 12:18:31 crc kubenswrapper[4660]: I0129 12:18:31.719326 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb" Jan 29 12:18:38 crc kubenswrapper[4660]: I0129 12:18:38.226437 4660 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 12:18:38 crc kubenswrapper[4660]: I0129 12:18:38.758137 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-59c8666fb5-rrkpq"] Jan 29 12:18:38 crc kubenswrapper[4660]: E0129 12:18:38.758392 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b73c58d-8e49-4f14-98a0-a67114e62ff3" containerName="util" Jan 29 12:18:38 crc kubenswrapper[4660]: I0129 12:18:38.758408 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b73c58d-8e49-4f14-98a0-a67114e62ff3" containerName="util" Jan 29 12:18:38 crc kubenswrapper[4660]: E0129 12:18:38.758427 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b73c58d-8e49-4f14-98a0-a67114e62ff3" containerName="extract" Jan 29 12:18:38 crc kubenswrapper[4660]: I0129 12:18:38.758433 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b73c58d-8e49-4f14-98a0-a67114e62ff3" containerName="extract" Jan 29 12:18:38 crc kubenswrapper[4660]: E0129 12:18:38.758445 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b73c58d-8e49-4f14-98a0-a67114e62ff3" containerName="pull" Jan 29 12:18:38 crc kubenswrapper[4660]: I0129 12:18:38.758450 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b73c58d-8e49-4f14-98a0-a67114e62ff3" containerName="pull" Jan 29 12:18:38 crc kubenswrapper[4660]: I0129 12:18:38.758548 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b73c58d-8e49-4f14-98a0-a67114e62ff3" containerName="extract" Jan 29 12:18:38 crc kubenswrapper[4660]: I0129 12:18:38.758999 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rrkpq" Jan 29 12:18:38 crc kubenswrapper[4660]: I0129 12:18:38.761106 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-9gqxt" Jan 29 12:18:38 crc kubenswrapper[4660]: I0129 12:18:38.796429 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-59c8666fb5-rrkpq"] Jan 29 12:18:38 crc kubenswrapper[4660]: I0129 12:18:38.918136 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbgkm\" (UniqueName: \"kubernetes.io/projected/8d8f5a32-f4d8-409a-9daa-99522117fad6-kube-api-access-rbgkm\") pod \"openstack-operator-controller-init-59c8666fb5-rrkpq\" (UID: \"8d8f5a32-f4d8-409a-9daa-99522117fad6\") " pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rrkpq" Jan 29 12:18:39 crc kubenswrapper[4660]: I0129 12:18:39.019451 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbgkm\" (UniqueName: \"kubernetes.io/projected/8d8f5a32-f4d8-409a-9daa-99522117fad6-kube-api-access-rbgkm\") pod \"openstack-operator-controller-init-59c8666fb5-rrkpq\" (UID: \"8d8f5a32-f4d8-409a-9daa-99522117fad6\") " pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rrkpq" Jan 29 12:18:39 crc kubenswrapper[4660]: I0129 12:18:39.040142 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbgkm\" (UniqueName: \"kubernetes.io/projected/8d8f5a32-f4d8-409a-9daa-99522117fad6-kube-api-access-rbgkm\") pod \"openstack-operator-controller-init-59c8666fb5-rrkpq\" (UID: \"8d8f5a32-f4d8-409a-9daa-99522117fad6\") " pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rrkpq" Jan 29 12:18:39 crc kubenswrapper[4660]: I0129 12:18:39.081132 4660 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rrkpq" Jan 29 12:18:39 crc kubenswrapper[4660]: I0129 12:18:39.573189 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-59c8666fb5-rrkpq"] Jan 29 12:18:39 crc kubenswrapper[4660]: I0129 12:18:39.761984 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rrkpq" event={"ID":"8d8f5a32-f4d8-409a-9daa-99522117fad6","Type":"ContainerStarted","Data":"25b4cc4c59ac8e22548f50a42f3034078be2350d2af48d1c65ae3c2508936547"} Jan 29 12:18:46 crc kubenswrapper[4660]: I0129 12:18:46.830239 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rrkpq" event={"ID":"8d8f5a32-f4d8-409a-9daa-99522117fad6","Type":"ContainerStarted","Data":"9c72c356bc08bd118c9cc2046453a59a4730462731332db27ed0b8bbcc7b5bba"} Jan 29 12:18:46 crc kubenswrapper[4660]: I0129 12:18:46.830796 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rrkpq" Jan 29 12:18:46 crc kubenswrapper[4660]: I0129 12:18:46.856666 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rrkpq" podStartSLOduration=2.7242447910000003 podStartE2EDuration="8.856652285s" podCreationTimestamp="2026-01-29 12:18:38 +0000 UTC" firstStartedPulling="2026-01-29 12:18:39.567436062 +0000 UTC m=+756.790378194" lastFinishedPulling="2026-01-29 12:18:45.699843556 +0000 UTC m=+762.922785688" observedRunningTime="2026-01-29 12:18:46.856087499 +0000 UTC m=+764.079029651" watchObservedRunningTime="2026-01-29 12:18:46.856652285 +0000 UTC m=+764.079594417" Jan 29 12:18:56 crc kubenswrapper[4660]: I0129 12:18:56.268939 4660 patch_prober.go:28] interesting 
pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:18:56 crc kubenswrapper[4660]: I0129 12:18:56.269779 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:18:59 crc kubenswrapper[4660]: I0129 12:18:59.085384 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rrkpq" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.409558 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.410922 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.415057 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-657667746d-nlj9n"] Jan 29 12:19:25 crc kubenswrapper[4660]: W0129 12:19:25.416271 4660 reflector.go:561] object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-5tj4q": failed to list *v1.Secret: secrets "cinder-operator-controller-manager-dockercfg-5tj4q" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Jan 29 12:19:25 crc kubenswrapper[4660]: E0129 12:19:25.416311 4660 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"cinder-operator-controller-manager-dockercfg-5tj4q\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cinder-operator-controller-manager-dockercfg-5tj4q\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.416734 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-657667746d-nlj9n" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.430284 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.433406 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dk67g" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.434191 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dws4q\" (UniqueName: \"kubernetes.io/projected/65751935-41e4-46ae-9cc8-c4e5d4193425-kube-api-access-dws4q\") pod \"cinder-operator-controller-manager-7595cf584-vhg9t\" (UID: \"65751935-41e4-46ae-9cc8-c4e5d4193425\") " pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.434207 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-55d5d5f8ff-jcsnd"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.434246 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwjhp\" (UniqueName: \"kubernetes.io/projected/531184c9-ac70-494d-9efd-19d8a9022f32-kube-api-access-vwjhp\") pod \"barbican-operator-controller-manager-657667746d-nlj9n\" (UID: \"531184c9-ac70-494d-9efd-19d8a9022f32\") " pod="openstack-operators/barbican-operator-controller-manager-657667746d-nlj9n" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.435073 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-jcsnd" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.439824 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-949hb" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.459016 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-p8t4z"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.459849 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-p8t4z" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.461989 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-9kwzd" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.477629 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55d5d5f8ff-jcsnd"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.496270 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-p8t4z"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.524125 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-xnhss"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.525073 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xnhss" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.536240 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwjhp\" (UniqueName: \"kubernetes.io/projected/531184c9-ac70-494d-9efd-19d8a9022f32-kube-api-access-vwjhp\") pod \"barbican-operator-controller-manager-657667746d-nlj9n\" (UID: \"531184c9-ac70-494d-9efd-19d8a9022f32\") " pod="openstack-operators/barbican-operator-controller-manager-657667746d-nlj9n" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.536399 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dws4q\" (UniqueName: \"kubernetes.io/projected/65751935-41e4-46ae-9cc8-c4e5d4193425-kube-api-access-dws4q\") pod \"cinder-operator-controller-manager-7595cf584-vhg9t\" (UID: \"65751935-41e4-46ae-9cc8-c4e5d4193425\") " pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.536457 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27flc\" (UniqueName: \"kubernetes.io/projected/6cb62294-b79d-4b84-b197-54a4ab0eeb50-kube-api-access-27flc\") pod \"designate-operator-controller-manager-55d5d5f8ff-jcsnd\" (UID: \"6cb62294-b79d-4b84-b197-54a4ab0eeb50\") " pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-jcsnd" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.542315 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5499bccc75-2g5t6"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.547279 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-26tbf" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.548227 4660 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-657667746d-nlj9n"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.548281 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5499bccc75-2g5t6" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.551412 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-twnwb" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.556431 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-xnhss"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.578418 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5499bccc75-2g5t6"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.582457 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dws4q\" (UniqueName: \"kubernetes.io/projected/65751935-41e4-46ae-9cc8-c4e5d4193425-kube-api-access-dws4q\") pod \"cinder-operator-controller-manager-7595cf584-vhg9t\" (UID: \"65751935-41e4-46ae-9cc8-c4e5d4193425\") " pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.589821 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwjhp\" (UniqueName: \"kubernetes.io/projected/531184c9-ac70-494d-9efd-19d8a9022f32-kube-api-access-vwjhp\") pod \"barbican-operator-controller-manager-657667746d-nlj9n\" (UID: \"531184c9-ac70-494d-9efd-19d8a9022f32\") " pod="openstack-operators/barbican-operator-controller-manager-657667746d-nlj9n" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.599478 4660 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-n5x67"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.600406 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-n5x67" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.606304 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-kpxbz" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.620605 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-jstjj"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.622423 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.624666 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-p2ntg" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.625216 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.642880 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sshx5\" (UniqueName: \"kubernetes.io/projected/693c5d51-c352-44fa-bbe8-8cd0ca86b80b-kube-api-access-sshx5\") pod \"horizon-operator-controller-manager-5fb775575f-xnhss\" (UID: \"693c5d51-c352-44fa-bbe8-8cd0ca86b80b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xnhss" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.642949 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qrrc\" (UniqueName: 
\"kubernetes.io/projected/2604f568-e449-450f-8c55-ab4d25510d85-kube-api-access-2qrrc\") pod \"glance-operator-controller-manager-6db5dbd896-p8t4z\" (UID: \"2604f568-e449-450f-8c55-ab4d25510d85\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-p8t4z" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.642981 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27flc\" (UniqueName: \"kubernetes.io/projected/6cb62294-b79d-4b84-b197-54a4ab0eeb50-kube-api-access-27flc\") pod \"designate-operator-controller-manager-55d5d5f8ff-jcsnd\" (UID: \"6cb62294-b79d-4b84-b197-54a4ab0eeb50\") " pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-jcsnd" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.645318 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-n5x67"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.662745 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-jstjj"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.689953 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-c2w2h"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.707833 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-c2w2h"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.707930 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-c2w2h" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.712591 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gknj7" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.720756 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27flc\" (UniqueName: \"kubernetes.io/projected/6cb62294-b79d-4b84-b197-54a4ab0eeb50-kube-api-access-27flc\") pod \"designate-operator-controller-manager-55d5d5f8ff-jcsnd\" (UID: \"6cb62294-b79d-4b84-b197-54a4ab0eeb50\") " pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-jcsnd" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.740305 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-657667746d-nlj9n" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.745233 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6475bdcbc4-hqvr9"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.746163 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-hqvr9" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.747006 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6x8x\" (UniqueName: \"kubernetes.io/projected/b5ec2d08-e2cd-4103-bbab-63de4ecc5902-kube-api-access-l6x8x\") pod \"keystone-operator-controller-manager-77bb7ffb8c-c2w2h\" (UID: \"b5ec2d08-e2cd-4103-bbab-63de4ecc5902\") " pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-c2w2h" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.747053 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz4j5\" (UniqueName: \"kubernetes.io/projected/6dfcb8d5-eaa9-44ad-81e6-e2243bf5083d-kube-api-access-zz4j5\") pod \"heat-operator-controller-manager-5499bccc75-2g5t6\" (UID: \"6dfcb8d5-eaa9-44ad-81e6-e2243bf5083d\") " pod="openstack-operators/heat-operator-controller-manager-5499bccc75-2g5t6" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.747261 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sshx5\" (UniqueName: \"kubernetes.io/projected/693c5d51-c352-44fa-bbe8-8cd0ca86b80b-kube-api-access-sshx5\") pod \"horizon-operator-controller-manager-5fb775575f-xnhss\" (UID: \"693c5d51-c352-44fa-bbe8-8cd0ca86b80b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xnhss" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.747527 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qrrc\" (UniqueName: \"kubernetes.io/projected/2604f568-e449-450f-8c55-ab4d25510d85-kube-api-access-2qrrc\") pod \"glance-operator-controller-manager-6db5dbd896-p8t4z\" (UID: \"2604f568-e449-450f-8c55-ab4d25510d85\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-p8t4z" Jan 29 12:19:25 crc 
kubenswrapper[4660]: I0129 12:19:25.747560 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wqgt\" (UniqueName: \"kubernetes.io/projected/379c54b4-ce54-4a69-8c0e-722fa84ed09f-kube-api-access-4wqgt\") pod \"infra-operator-controller-manager-79955696d6-jstjj\" (UID: \"379c54b4-ce54-4a69-8c0e-722fa84ed09f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.747600 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rcct\" (UniqueName: \"kubernetes.io/projected/0ea29cd5-b44c-4d84-ad66-3360df645d54-kube-api-access-4rcct\") pod \"ironic-operator-controller-manager-56cb7c4b4c-n5x67\" (UID: \"0ea29cd5-b44c-4d84-ad66-3360df645d54\") " pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-n5x67" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.747625 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert\") pod \"infra-operator-controller-manager-79955696d6-jstjj\" (UID: \"379c54b4-ce54-4a69-8c0e-722fa84ed09f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.747650 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt4hg\" (UniqueName: \"kubernetes.io/projected/fe8323d4-90f2-455d-8198-de7b1918f1ae-kube-api-access-kt4hg\") pod \"manila-operator-controller-manager-6475bdcbc4-hqvr9\" (UID: \"fe8323d4-90f2-455d-8198-de7b1918f1ae\") " pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-hqvr9" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.755962 4660 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-mkhkd" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.782518 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qrrc\" (UniqueName: \"kubernetes.io/projected/2604f568-e449-450f-8c55-ab4d25510d85-kube-api-access-2qrrc\") pod \"glance-operator-controller-manager-6db5dbd896-p8t4z\" (UID: \"2604f568-e449-450f-8c55-ab4d25510d85\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-p8t4z" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.794684 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6475bdcbc4-hqvr9"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.794971 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-jcsnd" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.797174 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sshx5\" (UniqueName: \"kubernetes.io/projected/693c5d51-c352-44fa-bbe8-8cd0ca86b80b-kube-api-access-sshx5\") pod \"horizon-operator-controller-manager-5fb775575f-xnhss\" (UID: \"693c5d51-c352-44fa-bbe8-8cd0ca86b80b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xnhss" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.803724 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-p8t4z" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.809028 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-lbsb9"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.809898 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lbsb9" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.823558 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-llnn8" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.841315 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xnhss" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.850321 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-55df775b69-4pcpn"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.856075 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6x8x\" (UniqueName: \"kubernetes.io/projected/b5ec2d08-e2cd-4103-bbab-63de4ecc5902-kube-api-access-l6x8x\") pod \"keystone-operator-controller-manager-77bb7ffb8c-c2w2h\" (UID: \"b5ec2d08-e2cd-4103-bbab-63de4ecc5902\") " pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-c2w2h" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.856133 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz4j5\" (UniqueName: \"kubernetes.io/projected/6dfcb8d5-eaa9-44ad-81e6-e2243bf5083d-kube-api-access-zz4j5\") pod \"heat-operator-controller-manager-5499bccc75-2g5t6\" (UID: \"6dfcb8d5-eaa9-44ad-81e6-e2243bf5083d\") " pod="openstack-operators/heat-operator-controller-manager-5499bccc75-2g5t6" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.856185 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv84t\" (UniqueName: \"kubernetes.io/projected/a2bffb25-e078-4a03-9875-d8a154991b1e-kube-api-access-gv84t\") pod \"mariadb-operator-controller-manager-67bf948998-lbsb9\" (UID: 
\"a2bffb25-e078-4a03-9875-d8a154991b1e\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lbsb9" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.856274 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wqgt\" (UniqueName: \"kubernetes.io/projected/379c54b4-ce54-4a69-8c0e-722fa84ed09f-kube-api-access-4wqgt\") pod \"infra-operator-controller-manager-79955696d6-jstjj\" (UID: \"379c54b4-ce54-4a69-8c0e-722fa84ed09f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.856316 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rcct\" (UniqueName: \"kubernetes.io/projected/0ea29cd5-b44c-4d84-ad66-3360df645d54-kube-api-access-4rcct\") pod \"ironic-operator-controller-manager-56cb7c4b4c-n5x67\" (UID: \"0ea29cd5-b44c-4d84-ad66-3360df645d54\") " pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-n5x67" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.856334 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert\") pod \"infra-operator-controller-manager-79955696d6-jstjj\" (UID: \"379c54b4-ce54-4a69-8c0e-722fa84ed09f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.856350 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt4hg\" (UniqueName: \"kubernetes.io/projected/fe8323d4-90f2-455d-8198-de7b1918f1ae-kube-api-access-kt4hg\") pod \"manila-operator-controller-manager-6475bdcbc4-hqvr9\" (UID: \"fe8323d4-90f2-455d-8198-de7b1918f1ae\") " pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-hqvr9" Jan 29 12:19:25 crc kubenswrapper[4660]: E0129 12:19:25.857199 4660 
secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 12:19:25 crc kubenswrapper[4660]: E0129 12:19:25.857247 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert podName:379c54b4-ce54-4a69-8c0e-722fa84ed09f nodeName:}" failed. No retries permitted until 2026-01-29 12:19:26.3572325 +0000 UTC m=+803.580174632 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert") pod "infra-operator-controller-manager-79955696d6-jstjj" (UID: "379c54b4-ce54-4a69-8c0e-722fa84ed09f") : secret "infra-operator-webhook-server-cert" not found Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.861158 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-55df775b69-4pcpn" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.863009 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-rfm74" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.863181 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-ncfnn"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.864002 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-ncfnn" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.870384 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-lxj4c" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.893566 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6b855b4fc4-gmw7z"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.922252 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wqgt\" (UniqueName: \"kubernetes.io/projected/379c54b4-ce54-4a69-8c0e-722fa84ed09f-kube-api-access-4wqgt\") pod \"infra-operator-controller-manager-79955696d6-jstjj\" (UID: \"379c54b4-ce54-4a69-8c0e-722fa84ed09f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.930978 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6x8x\" (UniqueName: \"kubernetes.io/projected/b5ec2d08-e2cd-4103-bbab-63de4ecc5902-kube-api-access-l6x8x\") pod \"keystone-operator-controller-manager-77bb7ffb8c-c2w2h\" (UID: \"b5ec2d08-e2cd-4103-bbab-63de4ecc5902\") " pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-c2w2h" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.931532 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt4hg\" (UniqueName: \"kubernetes.io/projected/fe8323d4-90f2-455d-8198-de7b1918f1ae-kube-api-access-kt4hg\") pod \"manila-operator-controller-manager-6475bdcbc4-hqvr9\" (UID: \"fe8323d4-90f2-455d-8198-de7b1918f1ae\") " pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-hqvr9" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.931997 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zz4j5\" (UniqueName: \"kubernetes.io/projected/6dfcb8d5-eaa9-44ad-81e6-e2243bf5083d-kube-api-access-zz4j5\") pod \"heat-operator-controller-manager-5499bccc75-2g5t6\" (UID: \"6dfcb8d5-eaa9-44ad-81e6-e2243bf5083d\") " pod="openstack-operators/heat-operator-controller-manager-5499bccc75-2g5t6" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.931956 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rcct\" (UniqueName: \"kubernetes.io/projected/0ea29cd5-b44c-4d84-ad66-3360df645d54-kube-api-access-4rcct\") pod \"ironic-operator-controller-manager-56cb7c4b4c-n5x67\" (UID: \"0ea29cd5-b44c-4d84-ad66-3360df645d54\") " pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-n5x67" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.937788 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-gmw7z" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.980256 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-55df775b69-4pcpn"] Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.980504 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-p2zkz" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.980675 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv84t\" (UniqueName: \"kubernetes.io/projected/a2bffb25-e078-4a03-9875-d8a154991b1e-kube-api-access-gv84t\") pod \"mariadb-operator-controller-manager-67bf948998-lbsb9\" (UID: \"a2bffb25-e078-4a03-9875-d8a154991b1e\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lbsb9" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.980726 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-l77h9\" (UniqueName: \"kubernetes.io/projected/6ecfd6c0-ee18-43ef-a3d8-85db7dcfcd00-kube-api-access-l77h9\") pod \"nova-operator-controller-manager-5ccd5b7f8f-ncfnn\" (UID: \"6ecfd6c0-ee18-43ef-a3d8-85db7dcfcd00\") " pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-ncfnn" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.980750 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h79mm\" (UniqueName: \"kubernetes.io/projected/a39fd043-26b6-4d3a-99ae-920c9b0664c0-kube-api-access-h79mm\") pod \"neutron-operator-controller-manager-55df775b69-4pcpn\" (UID: \"a39fd043-26b6-4d3a-99ae-920c9b0664c0\") " pod="openstack-operators/neutron-operator-controller-manager-55df775b69-4pcpn" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.980806 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4tmw\" (UniqueName: \"kubernetes.io/projected/16fea8b6-0800-4b2f-abae-8ccbb97dee90-kube-api-access-p4tmw\") pod \"octavia-operator-controller-manager-6b855b4fc4-gmw7z\" (UID: \"16fea8b6-0800-4b2f-abae-8ccbb97dee90\") " pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-gmw7z" Jan 29 12:19:25 crc kubenswrapper[4660]: I0129 12:19:25.997924 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6b855b4fc4-gmw7z"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.010064 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-lbsb9"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.017722 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv84t\" (UniqueName: \"kubernetes.io/projected/a2bffb25-e078-4a03-9875-d8a154991b1e-kube-api-access-gv84t\") pod 
\"mariadb-operator-controller-manager-67bf948998-lbsb9\" (UID: \"a2bffb25-e078-4a03-9875-d8a154991b1e\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lbsb9" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.024461 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-ncfnn"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.040742 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.042042 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.062898 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-wnktg" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.066754 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.084236 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qwt8\" (UniqueName: \"kubernetes.io/projected/ae21e403-c97f-4a6b-bb36-867168ab3f60-kube-api-access-2qwt8\") pod \"ovn-operator-controller-manager-788c46999f-sgm4h\" (UID: \"ae21e403-c97f-4a6b-bb36-867168ab3f60\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.084578 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4tmw\" (UniqueName: \"kubernetes.io/projected/16fea8b6-0800-4b2f-abae-8ccbb97dee90-kube-api-access-p4tmw\") pod \"octavia-operator-controller-manager-6b855b4fc4-gmw7z\" (UID: 
\"16fea8b6-0800-4b2f-abae-8ccbb97dee90\") " pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-gmw7z" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.084796 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l77h9\" (UniqueName: \"kubernetes.io/projected/6ecfd6c0-ee18-43ef-a3d8-85db7dcfcd00-kube-api-access-l77h9\") pod \"nova-operator-controller-manager-5ccd5b7f8f-ncfnn\" (UID: \"6ecfd6c0-ee18-43ef-a3d8-85db7dcfcd00\") " pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-ncfnn" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.084876 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h79mm\" (UniqueName: \"kubernetes.io/projected/a39fd043-26b6-4d3a-99ae-920c9b0664c0-kube-api-access-h79mm\") pod \"neutron-operator-controller-manager-55df775b69-4pcpn\" (UID: \"a39fd043-26b6-4d3a-99ae-920c9b0664c0\") " pod="openstack-operators/neutron-operator-controller-manager-55df775b69-4pcpn" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.093069 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.095553 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.106028 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.107208 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.116907 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-lh8mj" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.117117 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.118717 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jxbzc" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.119520 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-c2w2h" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.130981 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.137257 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h79mm\" (UniqueName: \"kubernetes.io/projected/a39fd043-26b6-4d3a-99ae-920c9b0664c0-kube-api-access-h79mm\") pod \"neutron-operator-controller-manager-55df775b69-4pcpn\" (UID: \"a39fd043-26b6-4d3a-99ae-920c9b0664c0\") " pod="openstack-operators/neutron-operator-controller-manager-55df775b69-4pcpn" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.138472 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l77h9\" (UniqueName: \"kubernetes.io/projected/6ecfd6c0-ee18-43ef-a3d8-85db7dcfcd00-kube-api-access-l77h9\") pod \"nova-operator-controller-manager-5ccd5b7f8f-ncfnn\" (UID: \"6ecfd6c0-ee18-43ef-a3d8-85db7dcfcd00\") " 
pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-ncfnn" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.158738 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.159585 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.167150 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2nh4r" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.167255 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5499bccc75-2g5t6" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.170762 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-hqvr9" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.186925 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.187475 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qwt8\" (UniqueName: \"kubernetes.io/projected/ae21e403-c97f-4a6b-bb36-867168ab3f60-kube-api-access-2qwt8\") pod \"ovn-operator-controller-manager-788c46999f-sgm4h\" (UID: \"ae21e403-c97f-4a6b-bb36-867168ab3f60\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.187524 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9\" (UID: \"cb8b5e12-4b12-4d5c-b580-faa4aa0140fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.187561 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxmbk\" (UniqueName: \"kubernetes.io/projected/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-kube-api-access-lxmbk\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9\" (UID: \"cb8b5e12-4b12-4d5c-b580-faa4aa0140fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.187611 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glcrh\" (UniqueName: 
\"kubernetes.io/projected/935fa2bb-c3f3-47f1-a316-96b0df84aedc-kube-api-access-glcrh\") pod \"placement-operator-controller-manager-5b964cf4cd-dbf4c\" (UID: \"935fa2bb-c3f3-47f1-a316-96b0df84aedc\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.189279 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4tmw\" (UniqueName: \"kubernetes.io/projected/16fea8b6-0800-4b2f-abae-8ccbb97dee90-kube-api-access-p4tmw\") pod \"octavia-operator-controller-manager-6b855b4fc4-gmw7z\" (UID: \"16fea8b6-0800-4b2f-abae-8ccbb97dee90\") " pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-gmw7z" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.212503 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lbsb9" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.217201 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qwt8\" (UniqueName: \"kubernetes.io/projected/ae21e403-c97f-4a6b-bb36-867168ab3f60-kube-api-access-2qwt8\") pod \"ovn-operator-controller-manager-788c46999f-sgm4h\" (UID: \"ae21e403-c97f-4a6b-bb36-867168ab3f60\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.219116 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.229034 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-n5x67" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.257215 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-f4gll"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.262894 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-f4gll" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.270282 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.270327 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.270364 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.270880 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c16cbdb8ef8617300de27cd5dec6cb5c0e6298a291719baa852d3f0432803d5"} pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.270926 4660 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" containerID="cri-o://2c16cbdb8ef8617300de27cd5dec6cb5c0e6298a291719baa852d3f0432803d5" gracePeriod=600 Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.271342 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-kqnx9" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.286220 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-55df775b69-4pcpn" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.290361 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glcrh\" (UniqueName: \"kubernetes.io/projected/935fa2bb-c3f3-47f1-a316-96b0df84aedc-kube-api-access-glcrh\") pod \"placement-operator-controller-manager-5b964cf4cd-dbf4c\" (UID: \"935fa2bb-c3f3-47f1-a316-96b0df84aedc\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.290465 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdfkv\" (UniqueName: \"kubernetes.io/projected/6fc68dc9-a2bd-48a5-b31d-a29ca15489d8-kube-api-access-qdfkv\") pod \"telemetry-operator-controller-manager-c95fd9dc5-f4gll\" (UID: \"6fc68dc9-a2bd-48a5-b31d-a29ca15489d8\") " pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-f4gll" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.290497 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert\") pod 
\"openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9\" (UID: \"cb8b5e12-4b12-4d5c-b580-faa4aa0140fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.290556 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxmbk\" (UniqueName: \"kubernetes.io/projected/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-kube-api-access-lxmbk\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9\" (UID: \"cb8b5e12-4b12-4d5c-b580-faa4aa0140fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.290582 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7jnh\" (UniqueName: \"kubernetes.io/projected/78da1eca-6a33-4825-a671-a348c42a5f3e-kube-api-access-c7jnh\") pod \"swift-operator-controller-manager-6f7455757b-g2z99\" (UID: \"78da1eca-6a33-4825-a671-a348c42a5f3e\") " pod="openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99" Jan 29 12:19:26 crc kubenswrapper[4660]: E0129 12:19:26.291040 4660 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:19:26 crc kubenswrapper[4660]: E0129 12:19:26.291104 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert podName:cb8b5e12-4b12-4d5c-b580-faa4aa0140fe nodeName:}" failed. No retries permitted until 2026-01-29 12:19:26.791081357 +0000 UTC m=+804.014023489 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" (UID: "cb8b5e12-4b12-4d5c-b580-faa4aa0140fe") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.303827 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4vfnd"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.304618 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4vfnd" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.309279 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-ktzwp" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.319066 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-ncfnn" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.320169 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glcrh\" (UniqueName: \"kubernetes.io/projected/935fa2bb-c3f3-47f1-a316-96b0df84aedc-kube-api-access-glcrh\") pod \"placement-operator-controller-manager-5b964cf4cd-dbf4c\" (UID: \"935fa2bb-c3f3-47f1-a316-96b0df84aedc\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.350103 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-gmw7z" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.350500 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-f4gll"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.351463 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxmbk\" (UniqueName: \"kubernetes.io/projected/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-kube-api-access-lxmbk\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9\" (UID: \"cb8b5e12-4b12-4d5c-b580-faa4aa0140fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.371276 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4vfnd"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.403497 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert\") pod \"infra-operator-controller-manager-79955696d6-jstjj\" (UID: \"379c54b4-ce54-4a69-8c0e-722fa84ed09f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.403668 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdfkv\" (UniqueName: \"kubernetes.io/projected/6fc68dc9-a2bd-48a5-b31d-a29ca15489d8-kube-api-access-qdfkv\") pod \"telemetry-operator-controller-manager-c95fd9dc5-f4gll\" (UID: \"6fc68dc9-a2bd-48a5-b31d-a29ca15489d8\") " pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-f4gll" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.403818 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-4kcck\" (UniqueName: \"kubernetes.io/projected/4abdab31-b35e-415e-b9f3-d1f014624f1b-kube-api-access-4kcck\") pod \"test-operator-controller-manager-56f8bfcd9f-4vfnd\" (UID: \"4abdab31-b35e-415e-b9f3-d1f014624f1b\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4vfnd" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.403906 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7jnh\" (UniqueName: \"kubernetes.io/projected/78da1eca-6a33-4825-a671-a348c42a5f3e-kube-api-access-c7jnh\") pod \"swift-operator-controller-manager-6f7455757b-g2z99\" (UID: \"78da1eca-6a33-4825-a671-a348c42a5f3e\") " pod="openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99" Jan 29 12:19:26 crc kubenswrapper[4660]: E0129 12:19:26.410084 4660 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 12:19:26 crc kubenswrapper[4660]: E0129 12:19:26.410160 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert podName:379c54b4-ce54-4a69-8c0e-722fa84ed09f nodeName:}" failed. No retries permitted until 2026-01-29 12:19:27.410142676 +0000 UTC m=+804.633084808 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert") pod "infra-operator-controller-manager-79955696d6-jstjj" (UID: "379c54b4-ce54-4a69-8c0e-722fa84ed09f") : secret "infra-operator-webhook-server-cert" not found Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.410633 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.418416 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-56b5dc77fd-zbpsq"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.421140 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-zbpsq" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.429246 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdfkv\" (UniqueName: \"kubernetes.io/projected/6fc68dc9-a2bd-48a5-b31d-a29ca15489d8-kube-api-access-qdfkv\") pod \"telemetry-operator-controller-manager-c95fd9dc5-f4gll\" (UID: \"6fc68dc9-a2bd-48a5-b31d-a29ca15489d8\") " pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-f4gll" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.432105 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sj8j4" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.455918 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-56b5dc77fd-zbpsq"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.506300 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kcck\" (UniqueName: \"kubernetes.io/projected/4abdab31-b35e-415e-b9f3-d1f014624f1b-kube-api-access-4kcck\") pod \"test-operator-controller-manager-56f8bfcd9f-4vfnd\" (UID: \"4abdab31-b35e-415e-b9f3-d1f014624f1b\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4vfnd" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.506361 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qf6bq\" (UniqueName: \"kubernetes.io/projected/b5d2675b-f392-41fc-8d46-2f7c40e7d69d-kube-api-access-qf6bq\") pod \"watcher-operator-controller-manager-56b5dc77fd-zbpsq\" (UID: \"b5d2675b-f392-41fc-8d46-2f7c40e7d69d\") " pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-zbpsq" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.511769 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.528244 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.529113 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.534024 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.567840 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w585v"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.568651 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w585v" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.584702 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w585v"] Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.607446 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.607708 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf6bq\" (UniqueName: \"kubernetes.io/projected/b5d2675b-f392-41fc-8d46-2f7c40e7d69d-kube-api-access-qf6bq\") pod \"watcher-operator-controller-manager-56b5dc77fd-zbpsq\" (UID: \"b5d2675b-f392-41fc-8d46-2f7c40e7d69d\") " pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-zbpsq" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.607744 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd9g7\" (UniqueName: \"kubernetes.io/projected/c301e3ae-90ee-4c00-86be-1e7990da739c-kube-api-access-cd9g7\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.607818 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs\") pod 
\"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.607844 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj5rh\" (UniqueName: \"kubernetes.io/projected/339f7c0f-fb9f-4ce8-a2be-eb94620e67e8-kube-api-access-pj5rh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-w585v\" (UID: \"339f7c0f-fb9f-4ce8-a2be-eb94620e67e8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w585v" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.667914 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-f4gll" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.716571 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.716620 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj5rh\" (UniqueName: \"kubernetes.io/projected/339f7c0f-fb9f-4ce8-a2be-eb94620e67e8-kube-api-access-pj5rh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-w585v\" (UID: \"339f7c0f-fb9f-4ce8-a2be-eb94620e67e8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w585v" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.716673 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.716726 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd9g7\" (UniqueName: \"kubernetes.io/projected/c301e3ae-90ee-4c00-86be-1e7990da739c-kube-api-access-cd9g7\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.740793 4660 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" secret="" err="failed to sync secret cache: timed out waiting for the condition" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.740969 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" Jan 29 12:19:26 crc kubenswrapper[4660]: I0129 12:19:26.820647 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9\" (UID: \"cb8b5e12-4b12-4d5c-b580-faa4aa0140fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" Jan 29 12:19:26 crc kubenswrapper[4660]: E0129 12:19:26.821306 4660 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:19:26 crc kubenswrapper[4660]: E0129 12:19:26.821400 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert podName:cb8b5e12-4b12-4d5c-b580-faa4aa0140fe nodeName:}" failed. No retries permitted until 2026-01-29 12:19:27.821374695 +0000 UTC m=+805.044316827 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" (UID: "cb8b5e12-4b12-4d5c-b580-faa4aa0140fe") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.165331 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.165877 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.168774 4660 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.168858 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs podName:c301e3ae-90ee-4c00-86be-1e7990da739c nodeName:}" failed. No retries permitted until 2026-01-29 12:19:27.668835731 +0000 UTC m=+804.891777863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs") pod "openstack-operator-controller-manager-65dc8f5954-v6vw6" (UID: "c301e3ae-90ee-4c00-86be-1e7990da739c") : secret "metrics-server-cert" not found Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.168885 4660 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.168913 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs podName:c301e3ae-90ee-4c00-86be-1e7990da739c nodeName:}" failed. 
No retries permitted until 2026-01-29 12:19:27.668904723 +0000 UTC m=+804.891846855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs") pod "openstack-operator-controller-manager-65dc8f5954-v6vw6" (UID: "c301e3ae-90ee-4c00-86be-1e7990da739c") : secret "webhook-server-cert" not found Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.169592 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-5tj4q" Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.169764 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-nwgtv" Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.169826 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-7sv4z" Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.175110 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7jnh\" (UniqueName: \"kubernetes.io/projected/78da1eca-6a33-4825-a671-a348c42a5f3e-kube-api-access-c7jnh\") pod \"swift-operator-controller-manager-6f7455757b-g2z99\" (UID: \"78da1eca-6a33-4825-a671-a348c42a5f3e\") " pod="openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99" Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.207208 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd9g7\" (UniqueName: \"kubernetes.io/projected/c301e3ae-90ee-4c00-86be-1e7990da739c-kube-api-access-cd9g7\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 
12:19:27.209268 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj5rh\" (UniqueName: \"kubernetes.io/projected/339f7c0f-fb9f-4ce8-a2be-eb94620e67e8-kube-api-access-pj5rh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-w585v\" (UID: \"339f7c0f-fb9f-4ce8-a2be-eb94620e67e8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w585v" Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.223402 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf6bq\" (UniqueName: \"kubernetes.io/projected/b5d2675b-f392-41fc-8d46-2f7c40e7d69d-kube-api-access-qf6bq\") pod \"watcher-operator-controller-manager-56b5dc77fd-zbpsq\" (UID: \"b5d2675b-f392-41fc-8d46-2f7c40e7d69d\") " pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-zbpsq" Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.239379 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-657667746d-nlj9n"] Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.245008 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kcck\" (UniqueName: \"kubernetes.io/projected/4abdab31-b35e-415e-b9f3-d1f014624f1b-kube-api-access-4kcck\") pod \"test-operator-controller-manager-56f8bfcd9f-4vfnd\" (UID: \"4abdab31-b35e-415e-b9f3-d1f014624f1b\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4vfnd" Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.294066 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-p8t4z"] Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.303169 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4vfnd" Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.324953 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-xnhss"] Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.353013 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55d5d5f8ff-jcsnd"] Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.386349 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6475bdcbc4-hqvr9"] Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.404496 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w585v" Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.417919 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-zbpsq" Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.438046 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-c2w2h"] Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.449198 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert\") pod \"infra-operator-controller-manager-79955696d6-jstjj\" (UID: \"379c54b4-ce54-4a69-8c0e-722fa84ed09f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.452405 4660 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.452625 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert podName:379c54b4-ce54-4a69-8c0e-722fa84ed09f nodeName:}" failed. No retries permitted until 2026-01-29 12:19:29.452603767 +0000 UTC m=+806.675545899 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert") pod "infra-operator-controller-manager-79955696d6-jstjj" (UID: "379c54b4-ce54-4a69-8c0e-722fa84ed09f") : secret "infra-operator-webhook-server-cert" not found Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.453638 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99" Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.519638 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-n5x67"] Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.519700 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-lbsb9"] Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.544964 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5499bccc75-2g5t6"] Jan 29 12:19:27 crc kubenswrapper[4660]: W0129 12:19:27.669510 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ea29cd5_b44c_4d84_ad66_3360df645d54.slice/crio-8a8e4c78ff152f283deb73b1054828b8669138fcc2328fd122a92db37d0f968b WatchSource:0}: Error finding container 8a8e4c78ff152f283deb73b1054828b8669138fcc2328fd122a92db37d0f968b: Status 404 returned error can't find the container with id 8a8e4c78ff152f283deb73b1054828b8669138fcc2328fd122a92db37d0f968b Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.694406 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-ncfnn"] Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.784435 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.784492 4660 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.784649 4660 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.784723 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs podName:c301e3ae-90ee-4c00-86be-1e7990da739c nodeName:}" failed. No retries permitted until 2026-01-29 12:19:28.784704947 +0000 UTC m=+806.007647079 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs") pod "openstack-operator-controller-manager-65dc8f5954-v6vw6" (UID: "c301e3ae-90ee-4c00-86be-1e7990da739c") : secret "metrics-server-cert" not found Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.784783 4660 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.784811 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs podName:c301e3ae-90ee-4c00-86be-1e7990da739c nodeName:}" failed. No retries permitted until 2026-01-29 12:19:28.784802369 +0000 UTC m=+806.007744501 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs") pod "openstack-operator-controller-manager-65dc8f5954-v6vw6" (UID: "c301e3ae-90ee-4c00-86be-1e7990da739c") : secret "webhook-server-cert" not found Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.784830 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6b855b4fc4-gmw7z"] Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.798798 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t"] Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.827982 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-55df775b69-4pcpn"] Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.835328 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h"] Jan 29 12:19:27 crc kubenswrapper[4660]: W0129 12:19:27.842139 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fea8b6_0800_4b2f_abae_8ccbb97dee90.slice/crio-520c8a189f4b5267aeb204d6d72b08ec6b4699616f02904d5ee6363863cedf46 WatchSource:0}: Error finding container 520c8a189f4b5267aeb204d6d72b08ec6b4699616f02904d5ee6363863cedf46: Status 404 returned error can't find the container with id 520c8a189f4b5267aeb204d6d72b08ec6b4699616f02904d5ee6363863cedf46 Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.857022 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-f4gll"] Jan 29 12:19:27 crc kubenswrapper[4660]: W0129 12:19:27.863738 4660 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae21e403_c97f_4a6b_bb36_867168ab3f60.slice/crio-0f9ca916b5ec92c3835f76c991df16aa6f2efcf91b66fed79646c2e6d12e6953 WatchSource:0}: Error finding container 0f9ca916b5ec92c3835f76c991df16aa6f2efcf91b66fed79646c2e6d12e6953: Status 404 returned error can't find the container with id 0f9ca916b5ec92c3835f76c991df16aa6f2efcf91b66fed79646c2e6d12e6953 Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.865625 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c"] Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.883775 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/cinder-operator@sha256:ff28000b25898d44281f58bdf905dce2e0d59d11f939e85f8843b4a89dd25f10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dws4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-7595cf584-vhg9t_openstack-operators(65751935-41e4-46ae-9cc8-c4e5d4193425): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.885372 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" podUID="65751935-41e4-46ae-9cc8-c4e5d4193425" Jan 29 12:19:27 crc kubenswrapper[4660]: I0129 12:19:27.887622 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9\" (UID: \"cb8b5e12-4b12-4d5c-b580-faa4aa0140fe\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.888395 4660 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.888470 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert podName:cb8b5e12-4b12-4d5c-b580-faa4aa0140fe nodeName:}" failed. No retries permitted until 2026-01-29 12:19:29.88845234 +0000 UTC m=+807.111394472 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" (UID: "cb8b5e12-4b12-4d5c-b580-faa4aa0140fe") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.890326 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-glcrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-dbf4c_openstack-operators(935fa2bb-c3f3-47f1-a316-96b0df84aedc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.890463 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2qwt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-sgm4h_openstack-operators(ae21e403-c97f-4a6b-bb36-867168ab3f60): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.892314 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h" podUID="ae21e403-c97f-4a6b-bb36-867168ab3f60" Jan 29 12:19:27 crc kubenswrapper[4660]: E0129 12:19:27.892363 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c" podUID="935fa2bb-c3f3-47f1-a316-96b0df84aedc" Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.059559 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4vfnd"] Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.093334 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-f4gll" event={"ID":"6fc68dc9-a2bd-48a5-b31d-a29ca15489d8","Type":"ContainerStarted","Data":"cb92f83966631341abf524ab04149c0278a4123787a860b11e9305550100479e"} Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.094432 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5499bccc75-2g5t6" event={"ID":"6dfcb8d5-eaa9-44ad-81e6-e2243bf5083d","Type":"ContainerStarted","Data":"763944463dad6e95f7fd8171dd8a646248bd017ce66f3407d69b174ae856886e"} Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.104892 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-c2w2h" event={"ID":"b5ec2d08-e2cd-4103-bbab-63de4ecc5902","Type":"ContainerStarted","Data":"40f7bd1b012522170cd0e1998b26b562051d5950f81d204e10b227ed72ba0cae"} Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.111340 4660 generic.go:334] "Generic (PLEG): container finished" podID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerID="2c16cbdb8ef8617300de27cd5dec6cb5c0e6298a291719baa852d3f0432803d5" exitCode=0 Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.111388 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerDied","Data":"2c16cbdb8ef8617300de27cd5dec6cb5c0e6298a291719baa852d3f0432803d5"} Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.111417 4660 scope.go:117] "RemoveContainer" containerID="a0685f6fda28c06cc5da879db051ed52b865f71c1639d6f19634190e4d0d6734" Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.117440 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-n5x67" 
event={"ID":"0ea29cd5-b44c-4d84-ad66-3360df645d54","Type":"ContainerStarted","Data":"8a8e4c78ff152f283deb73b1054828b8669138fcc2328fd122a92db37d0f968b"} Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.134650 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-ncfnn" event={"ID":"6ecfd6c0-ee18-43ef-a3d8-85db7dcfcd00","Type":"ContainerStarted","Data":"c08408f97698a737926b23ae17fc90aba6cebffbf3209aca19478863ac33e85d"} Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.136126 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-55df775b69-4pcpn" event={"ID":"a39fd043-26b6-4d3a-99ae-920c9b0664c0","Type":"ContainerStarted","Data":"1b06b41a602089e501f8200d0b22e3129aef7232ef7426ca5cfc037c4cf8064b"} Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.137076 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h" event={"ID":"ae21e403-c97f-4a6b-bb36-867168ab3f60","Type":"ContainerStarted","Data":"0f9ca916b5ec92c3835f76c991df16aa6f2efcf91b66fed79646c2e6d12e6953"} Jan 29 12:19:28 crc kubenswrapper[4660]: E0129 12:19:28.147190 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h" podUID="ae21e403-c97f-4a6b-bb36-867168ab3f60" Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.150918 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-657667746d-nlj9n" 
event={"ID":"531184c9-ac70-494d-9efd-19d8a9022f32","Type":"ContainerStarted","Data":"5b28e6afe88e2afa153fe71d5446d82aa6e56eefbf8f3bb66f6bc79185d25144"} Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.153054 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xnhss" event={"ID":"693c5d51-c352-44fa-bbe8-8cd0ca86b80b","Type":"ContainerStarted","Data":"d96231d6bc0343e412bedfc673113a69d9629e8add5c78c36ef15f8ceefcb2da"} Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.154678 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-jcsnd" event={"ID":"6cb62294-b79d-4b84-b197-54a4ab0eeb50","Type":"ContainerStarted","Data":"b2d9f5714a981a215c0171c53c869b9aad2969f74dfb6888db0b8d551fbf6e17"} Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.163028 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-hqvr9" event={"ID":"fe8323d4-90f2-455d-8198-de7b1918f1ae","Type":"ContainerStarted","Data":"cce054b2524321ca3348ba89174e78e98713f9599859c8d71d4dfa4ebe25cf0f"} Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.168678 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4vfnd" event={"ID":"4abdab31-b35e-415e-b9f3-d1f014624f1b","Type":"ContainerStarted","Data":"ad198560f0e60301ac521fc3d487eb8a67c9967d431a0d5165eb0cef0e52ec04"} Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.173750 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lbsb9" event={"ID":"a2bffb25-e078-4a03-9875-d8a154991b1e","Type":"ContainerStarted","Data":"97143885e6ab594030d9de309c94d3490a2e8ab47fc58325d29d5867a3b01e41"} Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.177292 4660 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-p8t4z" event={"ID":"2604f568-e449-450f-8c55-ab4d25510d85","Type":"ContainerStarted","Data":"ca874a2abe9571597374e5df7b808b5c4169f48ccc1b2135dcd0d1789783df6c"} Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.178654 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-gmw7z" event={"ID":"16fea8b6-0800-4b2f-abae-8ccbb97dee90","Type":"ContainerStarted","Data":"520c8a189f4b5267aeb204d6d72b08ec6b4699616f02904d5ee6363863cedf46"} Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.181606 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" event={"ID":"65751935-41e4-46ae-9cc8-c4e5d4193425","Type":"ContainerStarted","Data":"b7f984046f26ed47e2705c81d867c54f109e6abefe4de2bb2c74174b1ba35b5d"} Jan 29 12:19:28 crc kubenswrapper[4660]: E0129 12:19:28.184263 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/cinder-operator@sha256:ff28000b25898d44281f58bdf905dce2e0d59d11f939e85f8843b4a89dd25f10\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" podUID="65751935-41e4-46ae-9cc8-c4e5d4193425" Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.185955 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c" event={"ID":"935fa2bb-c3f3-47f1-a316-96b0df84aedc","Type":"ContainerStarted","Data":"1a0a056e42c3bd713ae698826fc188f0fc68b78ed7fe90e0334e1c7aa7930caa"} Jan 29 12:19:28 crc kubenswrapper[4660]: E0129 12:19:28.187950 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c" podUID="935fa2bb-c3f3-47f1-a316-96b0df84aedc" Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.334678 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w585v"] Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.437417 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99"] Jan 29 12:19:28 crc kubenswrapper[4660]: E0129 12:19:28.449024 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c7jnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6f7455757b-g2z99_openstack-operators(78da1eca-6a33-4825-a671-a348c42a5f3e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.456223 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-56b5dc77fd-zbpsq"] Jan 29 12:19:28 crc kubenswrapper[4660]: E0129 12:19:28.456724 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99" podUID="78da1eca-6a33-4825-a671-a348c42a5f3e" Jan 29 12:19:28 crc kubenswrapper[4660]: W0129 12:19:28.459295 4660 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5d2675b_f392_41fc_8d46_2f7c40e7d69d.slice/crio-a6bb5e1a5befc973846ac54a0926daa06dd916912a12395d382a15447ed4e645 WatchSource:0}: Error finding container a6bb5e1a5befc973846ac54a0926daa06dd916912a12395d382a15447ed4e645: Status 404 returned error can't find the container with id a6bb5e1a5befc973846ac54a0926daa06dd916912a12395d382a15447ed4e645 Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.810601 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:28 crc kubenswrapper[4660]: I0129 12:19:28.811196 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:28 crc kubenswrapper[4660]: E0129 12:19:28.810806 4660 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 12:19:28 crc kubenswrapper[4660]: E0129 12:19:28.811505 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs podName:c301e3ae-90ee-4c00-86be-1e7990da739c nodeName:}" failed. No retries permitted until 2026-01-29 12:19:30.811482421 +0000 UTC m=+808.034424553 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs") pod "openstack-operator-controller-manager-65dc8f5954-v6vw6" (UID: "c301e3ae-90ee-4c00-86be-1e7990da739c") : secret "webhook-server-cert" not found Jan 29 12:19:28 crc kubenswrapper[4660]: E0129 12:19:28.811251 4660 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 12:19:28 crc kubenswrapper[4660]: E0129 12:19:28.811732 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs podName:c301e3ae-90ee-4c00-86be-1e7990da739c nodeName:}" failed. No retries permitted until 2026-01-29 12:19:30.811682476 +0000 UTC m=+808.034624608 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs") pod "openstack-operator-controller-manager-65dc8f5954-v6vw6" (UID: "c301e3ae-90ee-4c00-86be-1e7990da739c") : secret "metrics-server-cert" not found Jan 29 12:19:29 crc kubenswrapper[4660]: I0129 12:19:29.193937 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w585v" event={"ID":"339f7c0f-fb9f-4ce8-a2be-eb94620e67e8","Type":"ContainerStarted","Data":"9905c1128b48c25bbb5e3dbfdb32baf96a52d42063d04ba1427c89efbb635bc7"} Jan 29 12:19:29 crc kubenswrapper[4660]: I0129 12:19:29.194941 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99" event={"ID":"78da1eca-6a33-4825-a671-a348c42a5f3e","Type":"ContainerStarted","Data":"44fafc2fa104539efa90079146d8ff2999d97d94df5e60223f11de1228d82c75"} Jan 29 12:19:29 crc kubenswrapper[4660]: I0129 12:19:29.197421 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-zbpsq" event={"ID":"b5d2675b-f392-41fc-8d46-2f7c40e7d69d","Type":"ContainerStarted","Data":"a6bb5e1a5befc973846ac54a0926daa06dd916912a12395d382a15447ed4e645"} Jan 29 12:19:29 crc kubenswrapper[4660]: E0129 12:19:29.199319 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99" podUID="78da1eca-6a33-4825-a671-a348c42a5f3e" Jan 29 12:19:29 crc kubenswrapper[4660]: I0129 12:19:29.200566 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerStarted","Data":"3d1ba41f044c335ec471707e50a492b9e84168653c9eecc671750f18275596b6"} Jan 29 12:19:29 crc kubenswrapper[4660]: E0129 12:19:29.203408 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h" podUID="ae21e403-c97f-4a6b-bb36-867168ab3f60" Jan 29 12:19:29 crc kubenswrapper[4660]: E0129 12:19:29.203447 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c" podUID="935fa2bb-c3f3-47f1-a316-96b0df84aedc" Jan 29 12:19:29 crc 
kubenswrapper[4660]: E0129 12:19:29.203600 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/cinder-operator@sha256:ff28000b25898d44281f58bdf905dce2e0d59d11f939e85f8843b4a89dd25f10\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" podUID="65751935-41e4-46ae-9cc8-c4e5d4193425" Jan 29 12:19:29 crc kubenswrapper[4660]: I0129 12:19:29.539858 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert\") pod \"infra-operator-controller-manager-79955696d6-jstjj\" (UID: \"379c54b4-ce54-4a69-8c0e-722fa84ed09f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" Jan 29 12:19:29 crc kubenswrapper[4660]: E0129 12:19:29.540125 4660 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 12:19:29 crc kubenswrapper[4660]: E0129 12:19:29.540212 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert podName:379c54b4-ce54-4a69-8c0e-722fa84ed09f nodeName:}" failed. No retries permitted until 2026-01-29 12:19:33.540188682 +0000 UTC m=+810.763130894 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert") pod "infra-operator-controller-manager-79955696d6-jstjj" (UID: "379c54b4-ce54-4a69-8c0e-722fa84ed09f") : secret "infra-operator-webhook-server-cert" not found Jan 29 12:19:29 crc kubenswrapper[4660]: I0129 12:19:29.945523 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9\" (UID: \"cb8b5e12-4b12-4d5c-b580-faa4aa0140fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" Jan 29 12:19:29 crc kubenswrapper[4660]: E0129 12:19:29.945717 4660 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:19:29 crc kubenswrapper[4660]: E0129 12:19:29.945932 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert podName:cb8b5e12-4b12-4d5c-b580-faa4aa0140fe nodeName:}" failed. No retries permitted until 2026-01-29 12:19:33.945908868 +0000 UTC m=+811.168851080 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" (UID: "cb8b5e12-4b12-4d5c-b580-faa4aa0140fe") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:19:30 crc kubenswrapper[4660]: E0129 12:19:30.215317 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99" podUID="78da1eca-6a33-4825-a671-a348c42a5f3e" Jan 29 12:19:30 crc kubenswrapper[4660]: I0129 12:19:30.860897 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:30 crc kubenswrapper[4660]: I0129 12:19:30.860988 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:30 crc kubenswrapper[4660]: E0129 12:19:30.861135 4660 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 12:19:30 crc kubenswrapper[4660]: E0129 12:19:30.861188 4660 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs podName:c301e3ae-90ee-4c00-86be-1e7990da739c nodeName:}" failed. No retries permitted until 2026-01-29 12:19:34.861172974 +0000 UTC m=+812.084115106 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs") pod "openstack-operator-controller-manager-65dc8f5954-v6vw6" (UID: "c301e3ae-90ee-4c00-86be-1e7990da739c") : secret "metrics-server-cert" not found Jan 29 12:19:30 crc kubenswrapper[4660]: E0129 12:19:30.861598 4660 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 12:19:30 crc kubenswrapper[4660]: E0129 12:19:30.861632 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs podName:c301e3ae-90ee-4c00-86be-1e7990da739c nodeName:}" failed. No retries permitted until 2026-01-29 12:19:34.861617086 +0000 UTC m=+812.084559218 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs") pod "openstack-operator-controller-manager-65dc8f5954-v6vw6" (UID: "c301e3ae-90ee-4c00-86be-1e7990da739c") : secret "webhook-server-cert" not found Jan 29 12:19:33 crc kubenswrapper[4660]: I0129 12:19:33.629861 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert\") pod \"infra-operator-controller-manager-79955696d6-jstjj\" (UID: \"379c54b4-ce54-4a69-8c0e-722fa84ed09f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" Jan 29 12:19:33 crc kubenswrapper[4660]: E0129 12:19:33.630623 4660 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 12:19:33 crc kubenswrapper[4660]: E0129 12:19:33.630676 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert podName:379c54b4-ce54-4a69-8c0e-722fa84ed09f nodeName:}" failed. No retries permitted until 2026-01-29 12:19:41.63066216 +0000 UTC m=+818.853604292 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert") pod "infra-operator-controller-manager-79955696d6-jstjj" (UID: "379c54b4-ce54-4a69-8c0e-722fa84ed09f") : secret "infra-operator-webhook-server-cert" not found Jan 29 12:19:34 crc kubenswrapper[4660]: I0129 12:19:34.038418 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9\" (UID: \"cb8b5e12-4b12-4d5c-b580-faa4aa0140fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" Jan 29 12:19:34 crc kubenswrapper[4660]: E0129 12:19:34.038953 4660 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:19:34 crc kubenswrapper[4660]: E0129 12:19:34.039016 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert podName:cb8b5e12-4b12-4d5c-b580-faa4aa0140fe nodeName:}" failed. No retries permitted until 2026-01-29 12:19:42.038996988 +0000 UTC m=+819.261939120 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" (UID: "cb8b5e12-4b12-4d5c-b580-faa4aa0140fe") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:19:34 crc kubenswrapper[4660]: I0129 12:19:34.955155 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:34 crc kubenswrapper[4660]: E0129 12:19:34.955387 4660 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 12:19:34 crc kubenswrapper[4660]: I0129 12:19:34.955571 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:34 crc kubenswrapper[4660]: E0129 12:19:34.955652 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs podName:c301e3ae-90ee-4c00-86be-1e7990da739c nodeName:}" failed. No retries permitted until 2026-01-29 12:19:42.955626112 +0000 UTC m=+820.178568324 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs") pod "openstack-operator-controller-manager-65dc8f5954-v6vw6" (UID: "c301e3ae-90ee-4c00-86be-1e7990da739c") : secret "webhook-server-cert" not found Jan 29 12:19:34 crc kubenswrapper[4660]: E0129 12:19:34.955657 4660 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 12:19:34 crc kubenswrapper[4660]: E0129 12:19:34.955735 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs podName:c301e3ae-90ee-4c00-86be-1e7990da739c nodeName:}" failed. No retries permitted until 2026-01-29 12:19:42.955716734 +0000 UTC m=+820.178658936 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs") pod "openstack-operator-controller-manager-65dc8f5954-v6vw6" (UID: "c301e3ae-90ee-4c00-86be-1e7990da739c") : secret "metrics-server-cert" not found Jan 29 12:19:41 crc kubenswrapper[4660]: I0129 12:19:41.689791 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert\") pod \"infra-operator-controller-manager-79955696d6-jstjj\" (UID: \"379c54b4-ce54-4a69-8c0e-722fa84ed09f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" Jan 29 12:19:41 crc kubenswrapper[4660]: I0129 12:19:41.695214 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/379c54b4-ce54-4a69-8c0e-722fa84ed09f-cert\") pod \"infra-operator-controller-manager-79955696d6-jstjj\" (UID: \"379c54b4-ce54-4a69-8c0e-722fa84ed09f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" Jan 29 12:19:41 crc 
kubenswrapper[4660]: I0129 12:19:41.857364 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" Jan 29 12:19:42 crc kubenswrapper[4660]: I0129 12:19:42.095392 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9\" (UID: \"cb8b5e12-4b12-4d5c-b580-faa4aa0140fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" Jan 29 12:19:42 crc kubenswrapper[4660]: I0129 12:19:42.100122 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cb8b5e12-4b12-4d5c-b580-faa4aa0140fe-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9\" (UID: \"cb8b5e12-4b12-4d5c-b580-faa4aa0140fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" Jan 29 12:19:42 crc kubenswrapper[4660]: I0129 12:19:42.369531 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" Jan 29 12:19:43 crc kubenswrapper[4660]: I0129 12:19:43.012267 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:43 crc kubenswrapper[4660]: I0129 12:19:43.012350 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:43 crc kubenswrapper[4660]: E0129 12:19:43.012514 4660 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 12:19:43 crc kubenswrapper[4660]: E0129 12:19:43.012571 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs podName:c301e3ae-90ee-4c00-86be-1e7990da739c nodeName:}" failed. No retries permitted until 2026-01-29 12:19:59.012555049 +0000 UTC m=+836.235497181 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs") pod "openstack-operator-controller-manager-65dc8f5954-v6vw6" (UID: "c301e3ae-90ee-4c00-86be-1e7990da739c") : secret "metrics-server-cert" not found Jan 29 12:19:43 crc kubenswrapper[4660]: E0129 12:19:43.012965 4660 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 12:19:43 crc kubenswrapper[4660]: E0129 12:19:43.012998 4660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs podName:c301e3ae-90ee-4c00-86be-1e7990da739c nodeName:}" failed. No retries permitted until 2026-01-29 12:19:59.012987681 +0000 UTC m=+836.235929813 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs") pod "openstack-operator-controller-manager-65dc8f5954-v6vw6" (UID: "c301e3ae-90ee-4c00-86be-1e7990da739c") : secret "webhook-server-cert" not found Jan 29 12:19:43 crc kubenswrapper[4660]: E0129 12:19:43.667152 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/designate-operator@sha256:911dc92aad81afb21c6c776e39803ea6cc0bf7d0601dfe006c3983102b9b9542" Jan 29 12:19:43 crc kubenswrapper[4660]: E0129 12:19:43.667624 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/designate-operator@sha256:911dc92aad81afb21c6c776e39803ea6cc0bf7d0601dfe006c3983102b9b9542,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-27flc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-55d5d5f8ff-jcsnd_openstack-operators(6cb62294-b79d-4b84-b197-54a4ab0eeb50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:19:43 crc kubenswrapper[4660]: E0129 12:19:43.668994 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-jcsnd" podUID="6cb62294-b79d-4b84-b197-54a4ab0eeb50" Jan 29 12:19:44 crc kubenswrapper[4660]: E0129 12:19:44.319130 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/designate-operator@sha256:911dc92aad81afb21c6c776e39803ea6cc0bf7d0601dfe006c3983102b9b9542\\\"\"" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-jcsnd" podUID="6cb62294-b79d-4b84-b197-54a4ab0eeb50" Jan 29 12:19:48 crc kubenswrapper[4660]: E0129 12:19:48.895541 4660 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/ironic-operator@sha256:258f62f07a81276471e70a7f56776c57c7c928ecfe99c09b9b88b735c658d515" Jan 29 12:19:48 crc kubenswrapper[4660]: E0129 12:19:48.896296 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/ironic-operator@sha256:258f62f07a81276471e70a7f56776c57c7c928ecfe99c09b9b88b735c658d515,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4rcct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-56cb7c4b4c-n5x67_openstack-operators(0ea29cd5-b44c-4d84-ad66-3360df645d54): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:19:48 crc kubenswrapper[4660]: E0129 12:19:48.897406 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-n5x67" podUID="0ea29cd5-b44c-4d84-ad66-3360df645d54" Jan 29 12:19:49 crc kubenswrapper[4660]: E0129 12:19:49.349157 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:258f62f07a81276471e70a7f56776c57c7c928ecfe99c09b9b88b735c658d515\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-n5x67" podUID="0ea29cd5-b44c-4d84-ad66-3360df645d54" Jan 29 12:19:49 crc kubenswrapper[4660]: E0129 12:19:49.709072 4660 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 29 12:19:49 crc kubenswrapper[4660]: E0129 12:19:49.709278 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gv84t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-lbsb9_openstack-operators(a2bffb25-e078-4a03-9875-d8a154991b1e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:19:49 crc kubenswrapper[4660]: E0129 12:19:49.710442 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lbsb9" podUID="a2bffb25-e078-4a03-9875-d8a154991b1e" Jan 29 12:19:50 crc kubenswrapper[4660]: E0129 12:19:50.369870 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lbsb9" podUID="a2bffb25-e078-4a03-9875-d8a154991b1e" Jan 29 12:19:50 crc kubenswrapper[4660]: E0129 12:19:50.922735 4660 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/barbican-operator@sha256:1a0e11bdf895d664e5f3d054f738f9c0c392c6c8133a00e8f746b8e060dde077" Jan 29 12:19:50 crc kubenswrapper[4660]: E0129 12:19:50.923340 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/barbican-operator@sha256:1a0e11bdf895d664e5f3d054f738f9c0c392c6c8133a00e8f746b8e060dde077,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vwjhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-657667746d-nlj9n_openstack-operators(531184c9-ac70-494d-9efd-19d8a9022f32): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:19:50 crc kubenswrapper[4660]: E0129 12:19:50.924677 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-657667746d-nlj9n" podUID="531184c9-ac70-494d-9efd-19d8a9022f32" Jan 29 12:19:51 crc kubenswrapper[4660]: E0129 12:19:51.373193 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/barbican-operator@sha256:1a0e11bdf895d664e5f3d054f738f9c0c392c6c8133a00e8f746b8e060dde077\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-657667746d-nlj9n" podUID="531184c9-ac70-494d-9efd-19d8a9022f32" Jan 29 12:19:51 crc kubenswrapper[4660]: E0129 12:19:51.580576 4660 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:70d487d5bbaf7f1cf7a9fef3a8d7ff07648a5b457b33a03f90b0bf31c3393718" Jan 29 12:19:51 crc kubenswrapper[4660]: E0129 12:19:51.580805 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:70d487d5bbaf7f1cf7a9fef3a8d7ff07648a5b457b33a03f90b0bf31c3393718,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4tmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6b855b4fc4-gmw7z_openstack-operators(16fea8b6-0800-4b2f-abae-8ccbb97dee90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:19:51 crc kubenswrapper[4660]: E0129 12:19:51.582066 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-gmw7z" podUID="16fea8b6-0800-4b2f-abae-8ccbb97dee90" Jan 29 12:19:52 crc kubenswrapper[4660]: E0129 12:19:52.380847 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:70d487d5bbaf7f1cf7a9fef3a8d7ff07648a5b457b33a03f90b0bf31c3393718\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-gmw7z" podUID="16fea8b6-0800-4b2f-abae-8ccbb97dee90" Jan 29 12:19:53 crc kubenswrapper[4660]: E0129 12:19:53.370571 4660 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 29 12:19:53 crc kubenswrapper[4660]: E0129 12:19:53.371407 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pj5rh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-w585v_openstack-operators(339f7c0f-fb9f-4ce8-a2be-eb94620e67e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:19:53 crc kubenswrapper[4660]: E0129 12:19:53.373049 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w585v" podUID="339f7c0f-fb9f-4ce8-a2be-eb94620e67e8" Jan 29 12:19:53 crc kubenswrapper[4660]: E0129 12:19:53.389313 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w585v" podUID="339f7c0f-fb9f-4ce8-a2be-eb94620e67e8" Jan 29 12:19:54 crc 
kubenswrapper[4660]: E0129 12:19:54.028148 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 29 12:19:54 crc kubenswrapper[4660]: E0129 12:19:54.028411 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sshx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-xnhss_openstack-operators(693c5d51-c352-44fa-bbe8-8cd0ca86b80b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:19:54 crc kubenswrapper[4660]: E0129 12:19:54.029578 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xnhss" podUID="693c5d51-c352-44fa-bbe8-8cd0ca86b80b" Jan 29 12:19:54 crc kubenswrapper[4660]: E0129 12:19:54.395184 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xnhss" podUID="693c5d51-c352-44fa-bbe8-8cd0ca86b80b" Jan 29 12:19:59 crc kubenswrapper[4660]: I0129 12:19:59.070622 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:59 crc kubenswrapper[4660]: I0129 12:19:59.071081 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:59 crc kubenswrapper[4660]: I0129 12:19:59.079451 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:59 crc kubenswrapper[4660]: I0129 12:19:59.083723 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c301e3ae-90ee-4c00-86be-1e7990da739c-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-v6vw6\" (UID: \"c301e3ae-90ee-4c00-86be-1e7990da739c\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:19:59 crc kubenswrapper[4660]: I0129 12:19:59.156018 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:20:02 crc kubenswrapper[4660]: E0129 12:20:02.831974 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/telemetry-operator@sha256:3e0713a6e9097420ebc622a70d44fcc5e5e9b9e036babe2966afd7c6fb5dc40b" Jan 29 12:20:02 crc kubenswrapper[4660]: E0129 12:20:02.832879 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:3e0713a6e9097420ebc622a70d44fcc5e5e9b9e036babe2966afd7c6fb5dc40b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdfkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-c95fd9dc5-f4gll_openstack-operators(6fc68dc9-a2bd-48a5-b31d-a29ca15489d8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:20:02 crc kubenswrapper[4660]: E0129 12:20:02.834264 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-f4gll" podUID="6fc68dc9-a2bd-48a5-b31d-a29ca15489d8" Jan 29 12:20:03 crc kubenswrapper[4660]: E0129 12:20:03.482637 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:3e0713a6e9097420ebc622a70d44fcc5e5e9b9e036babe2966afd7c6fb5dc40b\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-f4gll" podUID="6fc68dc9-a2bd-48a5-b31d-a29ca15489d8" Jan 29 12:20:04 crc kubenswrapper[4660]: E0129 12:20:04.599561 4660 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/cinder-operator@sha256:ff28000b25898d44281f58bdf905dce2e0d59d11f939e85f8843b4a89dd25f10" Jan 29 12:20:04 crc kubenswrapper[4660]: E0129 12:20:04.600085 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/cinder-operator@sha256:ff28000b25898d44281f58bdf905dce2e0d59d11f939e85f8843b4a89dd25f10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dws4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-7595cf584-vhg9t_openstack-operators(65751935-41e4-46ae-9cc8-c4e5d4193425): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:20:04 crc kubenswrapper[4660]: E0129 12:20:04.601269 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" podUID="65751935-41e4-46ae-9cc8-c4e5d4193425" Jan 29 12:20:05 crc kubenswrapper[4660]: E0129 12:20:05.835188 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:acd5f9f02ae459f594ae752100606a9dab02d4ff4ba0d8b24534129e79b34534" Jan 29 12:20:05 crc kubenswrapper[4660]: E0129 12:20:05.836901 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:acd5f9f02ae459f594ae752100606a9dab02d4ff4ba0d8b24534129e79b34534,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l6x8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-77bb7ffb8c-c2w2h_openstack-operators(b5ec2d08-e2cd-4103-bbab-63de4ecc5902): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:20:05 crc kubenswrapper[4660]: E0129 12:20:05.838289 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-c2w2h" podUID="b5ec2d08-e2cd-4103-bbab-63de4ecc5902" Jan 29 12:20:06 crc kubenswrapper[4660]: E0129 12:20:06.497360 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:acd5f9f02ae459f594ae752100606a9dab02d4ff4ba0d8b24534129e79b34534\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-c2w2h" podUID="b5ec2d08-e2cd-4103-bbab-63de4ecc5902" Jan 29 12:20:07 crc kubenswrapper[4660]: E0129 12:20:07.203085 4660 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8" Jan 29 12:20:07 crc kubenswrapper[4660]: E0129 12:20:07.203583 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c7jnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6f7455757b-g2z99_openstack-operators(78da1eca-6a33-4825-a671-a348c42a5f3e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:20:07 crc kubenswrapper[4660]: E0129 12:20:07.204756 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99" podUID="78da1eca-6a33-4825-a671-a348c42a5f3e" Jan 29 12:20:07 crc kubenswrapper[4660]: E0129 12:20:07.828275 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 29 12:20:07 crc kubenswrapper[4660]: E0129 12:20:07.828455 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2qwt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-sgm4h_openstack-operators(ae21e403-c97f-4a6b-bb36-867168ab3f60): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:20:07 crc kubenswrapper[4660]: E0129 12:20:07.833530 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h" podUID="ae21e403-c97f-4a6b-bb36-867168ab3f60" Jan 29 12:20:08 crc kubenswrapper[4660]: E0129 12:20:08.484166 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 29 12:20:08 crc kubenswrapper[4660]: E0129 12:20:08.484364 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-glcrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-dbf4c_openstack-operators(935fa2bb-c3f3-47f1-a316-96b0df84aedc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:20:08 crc kubenswrapper[4660]: E0129 12:20:08.485534 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c" podUID="935fa2bb-c3f3-47f1-a316-96b0df84aedc" Jan 29 12:20:13 crc kubenswrapper[4660]: E0129 12:20:13.252920 4660 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:5a7235a96194f43fbbbee4085b28e1749733862ce801ef67413f496a1e5826bc" Jan 29 12:20:13 crc kubenswrapper[4660]: E0129 12:20:13.253405 4660 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:5a7235a96194f43fbbbee4085b28e1749733862ce801ef67413f496a1e5826bc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l77h9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5ccd5b7f8f-ncfnn_openstack-operators(6ecfd6c0-ee18-43ef-a3d8-85db7dcfcd00): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:20:13 crc kubenswrapper[4660]: E0129 12:20:13.256205 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-ncfnn" podUID="6ecfd6c0-ee18-43ef-a3d8-85db7dcfcd00" Jan 29 12:20:13 crc kubenswrapper[4660]: I0129 12:20:13.549340 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6"] Jan 29 12:20:13 crc kubenswrapper[4660]: I0129 12:20:13.563726 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-p8t4z" event={"ID":"2604f568-e449-450f-8c55-ab4d25510d85","Type":"ContainerStarted","Data":"fca1eb9a7cc89482e3f905a5003d9a3e388465cbad303a010ce589570724e4c0"} Jan 29 12:20:13 crc kubenswrapper[4660]: I0129 
12:20:13.563774 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-p8t4z" Jan 29 12:20:13 crc kubenswrapper[4660]: I0129 12:20:13.593828 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-p8t4z" podStartSLOduration=14.114483729 podStartE2EDuration="48.593808014s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:27.304877052 +0000 UTC m=+804.527819184" lastFinishedPulling="2026-01-29 12:20:01.784201327 +0000 UTC m=+839.007143469" observedRunningTime="2026-01-29 12:20:13.588053799 +0000 UTC m=+850.810995931" watchObservedRunningTime="2026-01-29 12:20:13.593808014 +0000 UTC m=+850.816750146" Jan 29 12:20:13 crc kubenswrapper[4660]: E0129 12:20:13.740092 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:5a7235a96194f43fbbbee4085b28e1749733862ce801ef67413f496a1e5826bc\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-ncfnn" podUID="6ecfd6c0-ee18-43ef-a3d8-85db7dcfcd00" Jan 29 12:20:13 crc kubenswrapper[4660]: I0129 12:20:13.968041 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9"] Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 12:20:14.004439 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-jstjj"] Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 12:20:14.569658 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4vfnd" 
event={"ID":"4abdab31-b35e-415e-b9f3-d1f014624f1b","Type":"ContainerStarted","Data":"1c76973d61337ca4692e78460d06e56930d5f4c82617e7efb3797bbd49fcb831"} Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 12:20:14.570016 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4vfnd" Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 12:20:14.572040 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5499bccc75-2g5t6" event={"ID":"6dfcb8d5-eaa9-44ad-81e6-e2243bf5083d","Type":"ContainerStarted","Data":"91d96c009e9c059893fd4a8ef584fcd38e6ee6ea943788773f79c93cd2c749b8"} Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 12:20:14.572774 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5499bccc75-2g5t6" Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 12:20:14.573636 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-n5x67" event={"ID":"0ea29cd5-b44c-4d84-ad66-3360df645d54","Type":"ContainerStarted","Data":"8c9dc62b64f1c31030f04406b833871207ece416f72c51cd033f04af86b5762d"} Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 12:20:14.573976 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-n5x67" Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 12:20:14.574590 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" event={"ID":"c301e3ae-90ee-4c00-86be-1e7990da739c","Type":"ContainerStarted","Data":"57db0a192d964b8c723624b0d7622c994d5ed3b2114b46a220cb475ff11e18ed"} Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 12:20:14.575959 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-55df775b69-4pcpn" event={"ID":"a39fd043-26b6-4d3a-99ae-920c9b0664c0","Type":"ContainerStarted","Data":"c28f0364508c5aec55c0000190a6c0f01c28b67577552b28d9994016eb30d95f"} Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 12:20:14.576065 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-55df775b69-4pcpn" Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 12:20:14.577466 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-hqvr9" event={"ID":"fe8323d4-90f2-455d-8198-de7b1918f1ae","Type":"ContainerStarted","Data":"a63c96785276e3bc0624020f149020c68557a60bea00e8515a345dab50b34107"} Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 12:20:14.587991 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4vfnd" podStartSLOduration=13.82492937 podStartE2EDuration="48.58797259s" podCreationTimestamp="2026-01-29 12:19:26 +0000 UTC" firstStartedPulling="2026-01-29 12:19:28.073251986 +0000 UTC m=+805.296194118" lastFinishedPulling="2026-01-29 12:20:02.836295176 +0000 UTC m=+840.059237338" observedRunningTime="2026-01-29 12:20:14.584559702 +0000 UTC m=+851.807501834" watchObservedRunningTime="2026-01-29 12:20:14.58797259 +0000 UTC m=+851.810914712" Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 12:20:14.607074 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-n5x67" podStartSLOduration=4.138293455 podStartE2EDuration="49.60705201s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:27.745398714 +0000 UTC m=+804.968340846" lastFinishedPulling="2026-01-29 12:20:13.214157269 +0000 UTC m=+850.437099401" observedRunningTime="2026-01-29 12:20:14.602852859 +0000 UTC 
m=+851.825794991" watchObservedRunningTime="2026-01-29 12:20:14.60705201 +0000 UTC m=+851.829994152" Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 12:20:14.631590 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-55df775b69-4pcpn" podStartSLOduration=9.733765028 podStartE2EDuration="49.631574816s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:27.867035065 +0000 UTC m=+805.089977197" lastFinishedPulling="2026-01-29 12:20:07.764844853 +0000 UTC m=+844.987786985" observedRunningTime="2026-01-29 12:20:14.630212437 +0000 UTC m=+851.853154589" watchObservedRunningTime="2026-01-29 12:20:14.631574816 +0000 UTC m=+851.854516948" Jan 29 12:20:14 crc kubenswrapper[4660]: W0129 12:20:14.643802 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb8b5e12_4b12_4d5c_b580_faa4aa0140fe.slice/crio-9c71f2a8914a80fde0a5ec4c6af581d5281effea146552625c1521395b9e8558 WatchSource:0}: Error finding container 9c71f2a8914a80fde0a5ec4c6af581d5281effea146552625c1521395b9e8558: Status 404 returned error can't find the container with id 9c71f2a8914a80fde0a5ec4c6af581d5281effea146552625c1521395b9e8558 Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 12:20:14.699093 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-hqvr9" podStartSLOduration=10.089609704 podStartE2EDuration="49.6990752s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:27.543416421 +0000 UTC m=+804.766358553" lastFinishedPulling="2026-01-29 12:20:07.152881917 +0000 UTC m=+844.375824049" observedRunningTime="2026-01-29 12:20:14.698235496 +0000 UTC m=+851.921177628" watchObservedRunningTime="2026-01-29 12:20:14.6990752 +0000 UTC m=+851.922017332" Jan 29 12:20:14 crc kubenswrapper[4660]: I0129 
12:20:14.699310 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5499bccc75-2g5t6" podStartSLOduration=9.724787045 podStartE2EDuration="49.699305106s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:27.790339113 +0000 UTC m=+805.013281245" lastFinishedPulling="2026-01-29 12:20:07.764857174 +0000 UTC m=+844.987799306" observedRunningTime="2026-01-29 12:20:14.673930575 +0000 UTC m=+851.896872707" watchObservedRunningTime="2026-01-29 12:20:14.699305106 +0000 UTC m=+851.922247238" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.584414 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" event={"ID":"379c54b4-ce54-4a69-8c0e-722fa84ed09f","Type":"ContainerStarted","Data":"1d4f0fd6281a80a8532bdec09709e2eeb946d4bd8046cfec696af4eb6f63ad9d"} Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.585850 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" event={"ID":"cb8b5e12-4b12-4d5c-b580-faa4aa0140fe","Type":"ContainerStarted","Data":"9c71f2a8914a80fde0a5ec4c6af581d5281effea146552625c1521395b9e8558"} Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.587462 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-zbpsq" event={"ID":"b5d2675b-f392-41fc-8d46-2f7c40e7d69d","Type":"ContainerStarted","Data":"a827df83ebf5324f4454441621c1eab38b0cf81b6c109a05aefae5ab7f6e6e2e"} Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.587568 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-zbpsq" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.590246 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" event={"ID":"c301e3ae-90ee-4c00-86be-1e7990da739c","Type":"ContainerStarted","Data":"cbc8fb1dca2fd3e37fa79d1825b6fc53cdc8dcff7b11608758c2d4badeceda63"} Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.590528 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.592385 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-jcsnd" event={"ID":"6cb62294-b79d-4b84-b197-54a4ab0eeb50","Type":"ContainerStarted","Data":"031a0af1de62514df99688760d8784701fb21c296150d63b6cf03c5a756f14a7"} Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.592654 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-jcsnd" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.594253 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w585v" event={"ID":"339f7c0f-fb9f-4ce8-a2be-eb94620e67e8","Type":"ContainerStarted","Data":"728ebe42edd0fa5fb534f8e849a56c9e12caf5e270d20490f60bb07109fdb6e8"} Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.595798 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lbsb9" event={"ID":"a2bffb25-e078-4a03-9875-d8a154991b1e","Type":"ContainerStarted","Data":"d331243eef8dd6005513265cff8b8c224f35c4067896535632b495dd5b15dfd7"} Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.596051 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lbsb9" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.597455 4660 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-657667746d-nlj9n" event={"ID":"531184c9-ac70-494d-9efd-19d8a9022f32","Type":"ContainerStarted","Data":"6174ec98671d8203ed89c7abbc471b8bef7b89afadfd2a68f7eceb8b35ac2016"} Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.597641 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-657667746d-nlj9n" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.599430 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-gmw7z" event={"ID":"16fea8b6-0800-4b2f-abae-8ccbb97dee90","Type":"ContainerStarted","Data":"ca6cb268506441fc6174940e1f17f920d198a91803970982f8ea713b6f7dd126"} Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.599550 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-gmw7z" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.601040 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xnhss" event={"ID":"693c5d51-c352-44fa-bbe8-8cd0ca86b80b","Type":"ContainerStarted","Data":"f01be2a5ca7512c169dd1a70166f8a1d14deebd26fa3d3850fdfeb8458288c49"} Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.601450 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-hqvr9" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.615465 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-zbpsq" podStartSLOduration=5.880371916 podStartE2EDuration="49.615444515s" podCreationTimestamp="2026-01-29 12:19:26 +0000 UTC" firstStartedPulling="2026-01-29 12:19:28.466951866 
+0000 UTC m=+805.689893998" lastFinishedPulling="2026-01-29 12:20:12.202024445 +0000 UTC m=+849.424966597" observedRunningTime="2026-01-29 12:20:15.608894396 +0000 UTC m=+852.831836538" watchObservedRunningTime="2026-01-29 12:20:15.615444515 +0000 UTC m=+852.838386647" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.638609 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xnhss" podStartSLOduration=3.1842496479999998 podStartE2EDuration="50.638594082s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:27.279013853 +0000 UTC m=+804.501955985" lastFinishedPulling="2026-01-29 12:20:14.733358287 +0000 UTC m=+851.956300419" observedRunningTime="2026-01-29 12:20:15.635799891 +0000 UTC m=+852.858742023" watchObservedRunningTime="2026-01-29 12:20:15.638594082 +0000 UTC m=+852.861536214" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.661229 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-657667746d-nlj9n" podStartSLOduration=3.2824550869999998 podStartE2EDuration="50.661213463s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:27.257433033 +0000 UTC m=+804.480375165" lastFinishedPulling="2026-01-29 12:20:14.636191399 +0000 UTC m=+851.859133541" observedRunningTime="2026-01-29 12:20:15.656344423 +0000 UTC m=+852.879286555" watchObservedRunningTime="2026-01-29 12:20:15.661213463 +0000 UTC m=+852.884155595" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.707972 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-gmw7z" podStartSLOduration=3.872230609 podStartE2EDuration="50.70795642s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:27.867366474 +0000 UTC 
m=+805.090308606" lastFinishedPulling="2026-01-29 12:20:14.703092285 +0000 UTC m=+851.926034417" observedRunningTime="2026-01-29 12:20:15.697683774 +0000 UTC m=+852.920625906" watchObservedRunningTime="2026-01-29 12:20:15.70795642 +0000 UTC m=+852.930898552" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.818642 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" podStartSLOduration=49.818627067 podStartE2EDuration="49.818627067s" podCreationTimestamp="2026-01-29 12:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:20:15.814147148 +0000 UTC m=+853.037089270" watchObservedRunningTime="2026-01-29 12:20:15.818627067 +0000 UTC m=+853.041569199" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.819386 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w585v" podStartSLOduration=3.628431786 podStartE2EDuration="49.819381299s" podCreationTimestamp="2026-01-29 12:19:26 +0000 UTC" firstStartedPulling="2026-01-29 12:19:28.342453476 +0000 UTC m=+805.565395608" lastFinishedPulling="2026-01-29 12:20:14.533402989 +0000 UTC m=+851.756345121" observedRunningTime="2026-01-29 12:20:15.754930693 +0000 UTC m=+852.977872825" watchObservedRunningTime="2026-01-29 12:20:15.819381299 +0000 UTC m=+853.042323431" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.844202 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xnhss" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.881133 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-jcsnd" podStartSLOduration=4.951026041 
podStartE2EDuration="50.881118187s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:27.304554723 +0000 UTC m=+804.527496855" lastFinishedPulling="2026-01-29 12:20:13.234646849 +0000 UTC m=+850.457589001" observedRunningTime="2026-01-29 12:20:15.877362359 +0000 UTC m=+853.100304481" watchObservedRunningTime="2026-01-29 12:20:15.881118187 +0000 UTC m=+853.104060319" Jan 29 12:20:15 crc kubenswrapper[4660]: I0129 12:20:15.881486 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lbsb9" podStartSLOduration=3.801805085 podStartE2EDuration="50.881482348s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:27.625123612 +0000 UTC m=+804.848065744" lastFinishedPulling="2026-01-29 12:20:14.704800875 +0000 UTC m=+851.927743007" observedRunningTime="2026-01-29 12:20:15.844023759 +0000 UTC m=+853.066965891" watchObservedRunningTime="2026-01-29 12:20:15.881482348 +0000 UTC m=+853.104424470" Jan 29 12:20:16 crc kubenswrapper[4660]: I0129 12:20:16.616435 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-f4gll" event={"ID":"6fc68dc9-a2bd-48a5-b31d-a29ca15489d8","Type":"ContainerStarted","Data":"ba50b7d38bb74b8ff98015d9d4deb188cbcb00d45ef7aa8ef47643584652f5aa"} Jan 29 12:20:16 crc kubenswrapper[4660]: I0129 12:20:16.618723 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-f4gll" Jan 29 12:20:16 crc kubenswrapper[4660]: I0129 12:20:16.650407 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-f4gll" podStartSLOduration=3.535630128 podStartE2EDuration="51.650390066s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 
12:19:27.862453547 +0000 UTC m=+805.085395689" lastFinishedPulling="2026-01-29 12:20:15.977213505 +0000 UTC m=+853.200155627" observedRunningTime="2026-01-29 12:20:16.646128433 +0000 UTC m=+853.869070575" watchObservedRunningTime="2026-01-29 12:20:16.650390066 +0000 UTC m=+853.873332198" Jan 29 12:20:18 crc kubenswrapper[4660]: E0129 12:20:18.472087 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99" podUID="78da1eca-6a33-4825-a671-a348c42a5f3e" Jan 29 12:20:19 crc kubenswrapper[4660]: E0129 12:20:19.473368 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/cinder-operator@sha256:ff28000b25898d44281f58bdf905dce2e0d59d11f939e85f8843b4a89dd25f10\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" podUID="65751935-41e4-46ae-9cc8-c4e5d4193425" Jan 29 12:20:19 crc kubenswrapper[4660]: I0129 12:20:19.476175 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:20:21 crc kubenswrapper[4660]: E0129 12:20:21.471632 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h" podUID="ae21e403-c97f-4a6b-bb36-867168ab3f60" Jan 29 12:20:21 crc kubenswrapper[4660]: I0129 12:20:21.655025 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" event={"ID":"379c54b4-ce54-4a69-8c0e-722fa84ed09f","Type":"ContainerStarted","Data":"4b779a67add6c022114bc75af12f02bdda597f3f770e3ee8fbf7ae9e6210406d"} Jan 29 12:20:21 crc kubenswrapper[4660]: I0129 12:20:21.655121 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" Jan 29 12:20:21 crc kubenswrapper[4660]: I0129 12:20:21.656544 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" event={"ID":"cb8b5e12-4b12-4d5c-b580-faa4aa0140fe","Type":"ContainerStarted","Data":"bb89f4c5fd8e69642d152b8407899fd17e989aac7b2b52ec95631be91baf9f98"} Jan 29 12:20:21 crc kubenswrapper[4660]: I0129 12:20:21.656657 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" Jan 29 12:20:21 crc kubenswrapper[4660]: I0129 12:20:21.657852 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-c2w2h" event={"ID":"b5ec2d08-e2cd-4103-bbab-63de4ecc5902","Type":"ContainerStarted","Data":"97db979bc7c639f2f3398c198dfa86f97102da99f9c2a7fccab29fca96b8eaef"} Jan 29 12:20:21 crc kubenswrapper[4660]: I0129 12:20:21.658025 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-c2w2h" Jan 29 12:20:21 crc kubenswrapper[4660]: I0129 12:20:21.669961 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" podStartSLOduration=50.904647114 podStartE2EDuration="56.669945908s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:20:14.700902652 +0000 UTC m=+851.923844784" 
lastFinishedPulling="2026-01-29 12:20:20.466201446 +0000 UTC m=+857.689143578" observedRunningTime="2026-01-29 12:20:21.668425145 +0000 UTC m=+858.891367307" watchObservedRunningTime="2026-01-29 12:20:21.669945908 +0000 UTC m=+858.892888040" Jan 29 12:20:21 crc kubenswrapper[4660]: I0129 12:20:21.689663 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-c2w2h" podStartSLOduration=3.8762261000000002 podStartE2EDuration="56.689643876s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:27.653530231 +0000 UTC m=+804.876472363" lastFinishedPulling="2026-01-29 12:20:20.466948007 +0000 UTC m=+857.689890139" observedRunningTime="2026-01-29 12:20:21.68425377 +0000 UTC m=+858.907195922" watchObservedRunningTime="2026-01-29 12:20:21.689643876 +0000 UTC m=+858.912586008" Jan 29 12:20:21 crc kubenswrapper[4660]: I0129 12:20:21.710052 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" podStartSLOduration=50.991659202 podStartE2EDuration="56.710036373s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:20:14.701255373 +0000 UTC m=+851.924197505" lastFinishedPulling="2026-01-29 12:20:20.419632544 +0000 UTC m=+857.642574676" observedRunningTime="2026-01-29 12:20:21.704674359 +0000 UTC m=+858.927616501" watchObservedRunningTime="2026-01-29 12:20:21.710036373 +0000 UTC m=+858.932978505" Jan 29 12:20:22 crc kubenswrapper[4660]: E0129 12:20:22.471337 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" 
pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c" podUID="935fa2bb-c3f3-47f1-a316-96b0df84aedc" Jan 29 12:20:25 crc kubenswrapper[4660]: I0129 12:20:25.744876 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-657667746d-nlj9n" Jan 29 12:20:25 crc kubenswrapper[4660]: I0129 12:20:25.801045 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-jcsnd" Jan 29 12:20:25 crc kubenswrapper[4660]: I0129 12:20:25.806856 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-p8t4z" Jan 29 12:20:25 crc kubenswrapper[4660]: I0129 12:20:25.852565 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xnhss" Jan 29 12:20:26 crc kubenswrapper[4660]: I0129 12:20:26.122825 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-c2w2h" Jan 29 12:20:26 crc kubenswrapper[4660]: I0129 12:20:26.169881 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5499bccc75-2g5t6" Jan 29 12:20:26 crc kubenswrapper[4660]: I0129 12:20:26.178523 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-hqvr9" Jan 29 12:20:26 crc kubenswrapper[4660]: I0129 12:20:26.228402 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lbsb9" Jan 29 12:20:26 crc kubenswrapper[4660]: I0129 12:20:26.240834 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-n5x67" Jan 29 12:20:26 crc kubenswrapper[4660]: I0129 12:20:26.298974 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-55df775b69-4pcpn" Jan 29 12:20:26 crc kubenswrapper[4660]: I0129 12:20:26.365444 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-gmw7z" Jan 29 12:20:26 crc kubenswrapper[4660]: I0129 12:20:26.671122 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-f4gll" Jan 29 12:20:27 crc kubenswrapper[4660]: I0129 12:20:27.307476 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4vfnd" Jan 29 12:20:27 crc kubenswrapper[4660]: I0129 12:20:27.421310 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-zbpsq" Jan 29 12:20:28 crc kubenswrapper[4660]: I0129 12:20:28.718124 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-ncfnn" event={"ID":"6ecfd6c0-ee18-43ef-a3d8-85db7dcfcd00","Type":"ContainerStarted","Data":"f14414c9c97f22601835dd70c2fb183d3c6fbd21b0ee5b097eba2948911ce532"} Jan 29 12:20:28 crc kubenswrapper[4660]: I0129 12:20:28.718660 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-ncfnn" Jan 29 12:20:28 crc kubenswrapper[4660]: I0129 12:20:28.733795 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-ncfnn" podStartSLOduration=3.712038459 podStartE2EDuration="1m3.733777781s" 
podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:27.862222011 +0000 UTC m=+805.085164133" lastFinishedPulling="2026-01-29 12:20:27.883961323 +0000 UTC m=+865.106903455" observedRunningTime="2026-01-29 12:20:28.732473243 +0000 UTC m=+865.955415375" watchObservedRunningTime="2026-01-29 12:20:28.733777781 +0000 UTC m=+865.956719913" Jan 29 12:20:29 crc kubenswrapper[4660]: I0129 12:20:29.162251 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-v6vw6" Jan 29 12:20:31 crc kubenswrapper[4660]: I0129 12:20:31.737403 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99" event={"ID":"78da1eca-6a33-4825-a671-a348c42a5f3e","Type":"ContainerStarted","Data":"aabcc626179ef2090bfb172fc6d665bae31bfd5a3bd3bb2a599b199a00afbda0"} Jan 29 12:20:31 crc kubenswrapper[4660]: I0129 12:20:31.737991 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99" Jan 29 12:20:31 crc kubenswrapper[4660]: I0129 12:20:31.756297 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99" podStartSLOduration=4.287109898 podStartE2EDuration="1m6.75627871s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:28.448850193 +0000 UTC m=+805.671792325" lastFinishedPulling="2026-01-29 12:20:30.918019005 +0000 UTC m=+868.140961137" observedRunningTime="2026-01-29 12:20:31.750598776 +0000 UTC m=+868.973540908" watchObservedRunningTime="2026-01-29 12:20:31.75627871 +0000 UTC m=+868.979220842" Jan 29 12:20:31 crc kubenswrapper[4660]: I0129 12:20:31.862334 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/infra-operator-controller-manager-79955696d6-jstjj" Jan 29 12:20:32 crc kubenswrapper[4660]: I0129 12:20:32.375640 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9" Jan 29 12:20:34 crc kubenswrapper[4660]: I0129 12:20:34.760482 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" event={"ID":"65751935-41e4-46ae-9cc8-c4e5d4193425","Type":"ContainerStarted","Data":"45518acd080f04d82233a193a33af58bdfd9fb0a012c6be758ae3053148236cd"} Jan 29 12:20:34 crc kubenswrapper[4660]: I0129 12:20:34.762870 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" Jan 29 12:20:34 crc kubenswrapper[4660]: I0129 12:20:34.764400 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c" event={"ID":"935fa2bb-c3f3-47f1-a316-96b0df84aedc","Type":"ContainerStarted","Data":"4a8bbc6807dafe1d5aa0a251aea5fffebd561dce3a18fcd7fe5917de93cf414e"} Jan 29 12:20:34 crc kubenswrapper[4660]: I0129 12:20:34.764802 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c" Jan 29 12:20:34 crc kubenswrapper[4660]: I0129 12:20:34.787290 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" podStartSLOduration=4.034580428 podStartE2EDuration="1m9.787264854s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:27.883621376 +0000 UTC m=+805.106563508" lastFinishedPulling="2026-01-29 12:20:33.636305802 +0000 UTC m=+870.859247934" observedRunningTime="2026-01-29 12:20:34.783392642 +0000 UTC m=+872.006334774" 
watchObservedRunningTime="2026-01-29 12:20:34.787264854 +0000 UTC m=+872.010206986" Jan 29 12:20:34 crc kubenswrapper[4660]: I0129 12:20:34.810650 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c" podStartSLOduration=3.7310575679999998 podStartE2EDuration="1m9.810628477s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:27.890159167 +0000 UTC m=+805.113101299" lastFinishedPulling="2026-01-29 12:20:33.969730076 +0000 UTC m=+871.192672208" observedRunningTime="2026-01-29 12:20:34.80655373 +0000 UTC m=+872.029495882" watchObservedRunningTime="2026-01-29 12:20:34.810628477 +0000 UTC m=+872.033570609" Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.080131 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c5j6x"] Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.082007 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.106097 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c5j6x"] Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.160710 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-catalog-content\") pod \"certified-operators-c5j6x\" (UID: \"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4\") " pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.160795 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-utilities\") pod \"certified-operators-c5j6x\" (UID: \"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4\") " pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.160853 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t9ht\" (UniqueName: \"kubernetes.io/projected/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-kube-api-access-5t9ht\") pod \"certified-operators-c5j6x\" (UID: \"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4\") " pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.262199 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-catalog-content\") pod \"certified-operators-c5j6x\" (UID: \"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4\") " pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.262272 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-utilities\") pod \"certified-operators-c5j6x\" (UID: \"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4\") " pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.262872 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-utilities\") pod \"certified-operators-c5j6x\" (UID: \"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4\") " pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.262916 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t9ht\" (UniqueName: \"kubernetes.io/projected/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-kube-api-access-5t9ht\") pod \"certified-operators-c5j6x\" (UID: \"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4\") " pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.262968 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-catalog-content\") pod \"certified-operators-c5j6x\" (UID: \"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4\") " pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.284342 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t9ht\" (UniqueName: \"kubernetes.io/projected/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-kube-api-access-5t9ht\") pod \"certified-operators-c5j6x\" (UID: \"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4\") " pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.400856 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.786532 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h" event={"ID":"ae21e403-c97f-4a6b-bb36-867168ab3f60","Type":"ContainerStarted","Data":"55365b65ca40447fca4fd75cabbdf9525c2130921ae2079647354b727f6e861f"} Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.787209 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h" Jan 29 12:20:35 crc kubenswrapper[4660]: I0129 12:20:35.827437 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c5j6x"] Jan 29 12:20:36 crc kubenswrapper[4660]: I0129 12:20:36.323518 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-ncfnn" Jan 29 12:20:36 crc kubenswrapper[4660]: I0129 12:20:36.364596 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h" podStartSLOduration=4.3357162240000005 podStartE2EDuration="1m11.364581186s" podCreationTimestamp="2026-01-29 12:19:25 +0000 UTC" firstStartedPulling="2026-01-29 12:19:27.890389554 +0000 UTC m=+805.113331686" lastFinishedPulling="2026-01-29 12:20:34.919254516 +0000 UTC m=+872.142196648" observedRunningTime="2026-01-29 12:20:35.84739592 +0000 UTC m=+873.070338052" watchObservedRunningTime="2026-01-29 12:20:36.364581186 +0000 UTC m=+873.587523318" Jan 29 12:20:36 crc kubenswrapper[4660]: I0129 12:20:36.834979 4660 generic.go:334] "Generic (PLEG): container finished" podID="e3a0822a-ce17-4cc0-93bf-2143b62f2bf4" containerID="3ef4cace577852d8fc66e84742f74db545bb81e3491bb573e0d1a971866f9d91" exitCode=0 Jan 29 12:20:36 crc kubenswrapper[4660]: I0129 12:20:36.835766 
4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5j6x" event={"ID":"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4","Type":"ContainerDied","Data":"3ef4cace577852d8fc66e84742f74db545bb81e3491bb573e0d1a971866f9d91"} Jan 29 12:20:36 crc kubenswrapper[4660]: I0129 12:20:36.835793 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5j6x" event={"ID":"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4","Type":"ContainerStarted","Data":"ede5ac84b5f9ab48f2974b1fa79a6652266cd40a571e95c174be101ea80c351f"} Jan 29 12:20:37 crc kubenswrapper[4660]: I0129 12:20:37.459376 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-g2z99" Jan 29 12:20:37 crc kubenswrapper[4660]: I0129 12:20:37.843991 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5j6x" event={"ID":"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4","Type":"ContainerStarted","Data":"74609a44fc6d7063c13ddd264885dfeca3dff19ad7b6e5adbdf42915127b4d1c"} Jan 29 12:20:38 crc kubenswrapper[4660]: I0129 12:20:38.852586 4660 generic.go:334] "Generic (PLEG): container finished" podID="e3a0822a-ce17-4cc0-93bf-2143b62f2bf4" containerID="74609a44fc6d7063c13ddd264885dfeca3dff19ad7b6e5adbdf42915127b4d1c" exitCode=0 Jan 29 12:20:38 crc kubenswrapper[4660]: I0129 12:20:38.852652 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5j6x" event={"ID":"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4","Type":"ContainerDied","Data":"74609a44fc6d7063c13ddd264885dfeca3dff19ad7b6e5adbdf42915127b4d1c"} Jan 29 12:20:40 crc kubenswrapper[4660]: I0129 12:20:40.653716 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fxb8t"] Jan 29 12:20:40 crc kubenswrapper[4660]: I0129 12:20:40.655314 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:40 crc kubenswrapper[4660]: I0129 12:20:40.666505 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fxb8t"] Jan 29 12:20:40 crc kubenswrapper[4660]: I0129 12:20:40.759997 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv2tn\" (UniqueName: \"kubernetes.io/projected/1a163b04-9b94-476d-b596-e37eb463554c-kube-api-access-xv2tn\") pod \"community-operators-fxb8t\" (UID: \"1a163b04-9b94-476d-b596-e37eb463554c\") " pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:40 crc kubenswrapper[4660]: I0129 12:20:40.760086 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a163b04-9b94-476d-b596-e37eb463554c-utilities\") pod \"community-operators-fxb8t\" (UID: \"1a163b04-9b94-476d-b596-e37eb463554c\") " pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:40 crc kubenswrapper[4660]: I0129 12:20:40.760132 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a163b04-9b94-476d-b596-e37eb463554c-catalog-content\") pod \"community-operators-fxb8t\" (UID: \"1a163b04-9b94-476d-b596-e37eb463554c\") " pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:40 crc kubenswrapper[4660]: I0129 12:20:40.861283 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv2tn\" (UniqueName: \"kubernetes.io/projected/1a163b04-9b94-476d-b596-e37eb463554c-kube-api-access-xv2tn\") pod \"community-operators-fxb8t\" (UID: \"1a163b04-9b94-476d-b596-e37eb463554c\") " pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:40 crc kubenswrapper[4660]: I0129 12:20:40.861376 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a163b04-9b94-476d-b596-e37eb463554c-utilities\") pod \"community-operators-fxb8t\" (UID: \"1a163b04-9b94-476d-b596-e37eb463554c\") " pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:40 crc kubenswrapper[4660]: I0129 12:20:40.861427 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a163b04-9b94-476d-b596-e37eb463554c-catalog-content\") pod \"community-operators-fxb8t\" (UID: \"1a163b04-9b94-476d-b596-e37eb463554c\") " pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:40 crc kubenswrapper[4660]: I0129 12:20:40.862041 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a163b04-9b94-476d-b596-e37eb463554c-catalog-content\") pod \"community-operators-fxb8t\" (UID: \"1a163b04-9b94-476d-b596-e37eb463554c\") " pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:40 crc kubenswrapper[4660]: I0129 12:20:40.866226 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a163b04-9b94-476d-b596-e37eb463554c-utilities\") pod \"community-operators-fxb8t\" (UID: \"1a163b04-9b94-476d-b596-e37eb463554c\") " pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:40 crc kubenswrapper[4660]: I0129 12:20:40.873130 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5j6x" event={"ID":"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4","Type":"ContainerStarted","Data":"f68d3f6431d5bb496dbe003ba8338f3c36cee1464712876e16203e2d54c65e96"} Jan 29 12:20:40 crc kubenswrapper[4660]: I0129 12:20:40.911643 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c5j6x" 
podStartSLOduration=2.962033119 podStartE2EDuration="5.911602598s" podCreationTimestamp="2026-01-29 12:20:35 +0000 UTC" firstStartedPulling="2026-01-29 12:20:36.839913398 +0000 UTC m=+874.062855530" lastFinishedPulling="2026-01-29 12:20:39.789482877 +0000 UTC m=+877.012425009" observedRunningTime="2026-01-29 12:20:40.904332379 +0000 UTC m=+878.127274511" watchObservedRunningTime="2026-01-29 12:20:40.911602598 +0000 UTC m=+878.134544730" Jan 29 12:20:40 crc kubenswrapper[4660]: I0129 12:20:40.913059 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv2tn\" (UniqueName: \"kubernetes.io/projected/1a163b04-9b94-476d-b596-e37eb463554c-kube-api-access-xv2tn\") pod \"community-operators-fxb8t\" (UID: \"1a163b04-9b94-476d-b596-e37eb463554c\") " pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:40 crc kubenswrapper[4660]: I0129 12:20:40.973873 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:41 crc kubenswrapper[4660]: I0129 12:20:41.453334 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fxb8t"] Jan 29 12:20:41 crc kubenswrapper[4660]: I0129 12:20:41.880039 4660 generic.go:334] "Generic (PLEG): container finished" podID="1a163b04-9b94-476d-b596-e37eb463554c" containerID="080733205ba40d7c60c0d8e512329ddd04b6ceb83490f5fbe87f5367ffca6bc7" exitCode=0 Jan 29 12:20:41 crc kubenswrapper[4660]: I0129 12:20:41.880094 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxb8t" event={"ID":"1a163b04-9b94-476d-b596-e37eb463554c","Type":"ContainerDied","Data":"080733205ba40d7c60c0d8e512329ddd04b6ceb83490f5fbe87f5367ffca6bc7"} Jan 29 12:20:41 crc kubenswrapper[4660]: I0129 12:20:41.880161 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxb8t" 
event={"ID":"1a163b04-9b94-476d-b596-e37eb463554c","Type":"ContainerStarted","Data":"c6ac0f57989bacb63d7a6723f644c186511ac18e808b0ed5450aab474f39c542"} Jan 29 12:20:45 crc kubenswrapper[4660]: I0129 12:20:45.401542 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:45 crc kubenswrapper[4660]: I0129 12:20:45.402917 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:45 crc kubenswrapper[4660]: I0129 12:20:45.456576 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:45 crc kubenswrapper[4660]: I0129 12:20:45.907871 4660 generic.go:334] "Generic (PLEG): container finished" podID="1a163b04-9b94-476d-b596-e37eb463554c" containerID="f76d8dfdcd82189b9db4390eeb81d141d7449bcb58500ac0e96eb70c7d79bec5" exitCode=0 Jan 29 12:20:45 crc kubenswrapper[4660]: I0129 12:20:45.907931 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxb8t" event={"ID":"1a163b04-9b94-476d-b596-e37eb463554c","Type":"ContainerDied","Data":"f76d8dfdcd82189b9db4390eeb81d141d7449bcb58500ac0e96eb70c7d79bec5"} Jan 29 12:20:45 crc kubenswrapper[4660]: I0129 12:20:45.976557 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:46 crc kubenswrapper[4660]: I0129 12:20:46.415152 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-sgm4h" Jan 29 12:20:46 crc kubenswrapper[4660]: I0129 12:20:46.515472 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-dbf4c" Jan 29 12:20:46 crc kubenswrapper[4660]: I0129 12:20:46.745117 4660 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-vhg9t" Jan 29 12:20:46 crc kubenswrapper[4660]: I0129 12:20:46.915966 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxb8t" event={"ID":"1a163b04-9b94-476d-b596-e37eb463554c","Type":"ContainerStarted","Data":"3bf8ce799a4553a90b64d937c54958b9eb73513395b7d6640b47e83416caed8e"} Jan 29 12:20:46 crc kubenswrapper[4660]: I0129 12:20:46.935766 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fxb8t" podStartSLOduration=2.50521881 podStartE2EDuration="6.935746436s" podCreationTimestamp="2026-01-29 12:20:40 +0000 UTC" firstStartedPulling="2026-01-29 12:20:41.882377171 +0000 UTC m=+879.105319303" lastFinishedPulling="2026-01-29 12:20:46.312904797 +0000 UTC m=+883.535846929" observedRunningTime="2026-01-29 12:20:46.933263225 +0000 UTC m=+884.156205357" watchObservedRunningTime="2026-01-29 12:20:46.935746436 +0000 UTC m=+884.158688568" Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.240383 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c5j6x"] Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.240597 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c5j6x" podUID="e3a0822a-ce17-4cc0-93bf-2143b62f2bf4" containerName="registry-server" containerID="cri-o://f68d3f6431d5bb496dbe003ba8338f3c36cee1464712876e16203e2d54c65e96" gracePeriod=2 Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.649084 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.688757 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-utilities\") pod \"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4\" (UID: \"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4\") " Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.688889 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t9ht\" (UniqueName: \"kubernetes.io/projected/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-kube-api-access-5t9ht\") pod \"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4\" (UID: \"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4\") " Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.690311 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-catalog-content\") pod \"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4\" (UID: \"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4\") " Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.690507 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-utilities" (OuterVolumeSpecName: "utilities") pod "e3a0822a-ce17-4cc0-93bf-2143b62f2bf4" (UID: "e3a0822a-ce17-4cc0-93bf-2143b62f2bf4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.691482 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.711147 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-kube-api-access-5t9ht" (OuterVolumeSpecName: "kube-api-access-5t9ht") pod "e3a0822a-ce17-4cc0-93bf-2143b62f2bf4" (UID: "e3a0822a-ce17-4cc0-93bf-2143b62f2bf4"). InnerVolumeSpecName "kube-api-access-5t9ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.793259 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t9ht\" (UniqueName: \"kubernetes.io/projected/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-kube-api-access-5t9ht\") on node \"crc\" DevicePath \"\"" Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.931190 4660 generic.go:334] "Generic (PLEG): container finished" podID="e3a0822a-ce17-4cc0-93bf-2143b62f2bf4" containerID="f68d3f6431d5bb496dbe003ba8338f3c36cee1464712876e16203e2d54c65e96" exitCode=0 Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.931245 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5j6x" event={"ID":"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4","Type":"ContainerDied","Data":"f68d3f6431d5bb496dbe003ba8338f3c36cee1464712876e16203e2d54c65e96"} Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.931302 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5j6x" event={"ID":"e3a0822a-ce17-4cc0-93bf-2143b62f2bf4","Type":"ContainerDied","Data":"ede5ac84b5f9ab48f2974b1fa79a6652266cd40a571e95c174be101ea80c351f"} Jan 29 12:20:48 crc kubenswrapper[4660]: 
I0129 12:20:48.931330 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c5j6x" Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.931356 4660 scope.go:117] "RemoveContainer" containerID="f68d3f6431d5bb496dbe003ba8338f3c36cee1464712876e16203e2d54c65e96" Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.953624 4660 scope.go:117] "RemoveContainer" containerID="74609a44fc6d7063c13ddd264885dfeca3dff19ad7b6e5adbdf42915127b4d1c" Jan 29 12:20:48 crc kubenswrapper[4660]: I0129 12:20:48.978859 4660 scope.go:117] "RemoveContainer" containerID="3ef4cace577852d8fc66e84742f74db545bb81e3491bb573e0d1a971866f9d91" Jan 29 12:20:49 crc kubenswrapper[4660]: I0129 12:20:49.006731 4660 scope.go:117] "RemoveContainer" containerID="f68d3f6431d5bb496dbe003ba8338f3c36cee1464712876e16203e2d54c65e96" Jan 29 12:20:49 crc kubenswrapper[4660]: E0129 12:20:49.007333 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f68d3f6431d5bb496dbe003ba8338f3c36cee1464712876e16203e2d54c65e96\": container with ID starting with f68d3f6431d5bb496dbe003ba8338f3c36cee1464712876e16203e2d54c65e96 not found: ID does not exist" containerID="f68d3f6431d5bb496dbe003ba8338f3c36cee1464712876e16203e2d54c65e96" Jan 29 12:20:49 crc kubenswrapper[4660]: I0129 12:20:49.007391 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f68d3f6431d5bb496dbe003ba8338f3c36cee1464712876e16203e2d54c65e96"} err="failed to get container status \"f68d3f6431d5bb496dbe003ba8338f3c36cee1464712876e16203e2d54c65e96\": rpc error: code = NotFound desc = could not find container \"f68d3f6431d5bb496dbe003ba8338f3c36cee1464712876e16203e2d54c65e96\": container with ID starting with f68d3f6431d5bb496dbe003ba8338f3c36cee1464712876e16203e2d54c65e96 not found: ID does not exist" Jan 29 12:20:49 crc kubenswrapper[4660]: I0129 12:20:49.007426 4660 
scope.go:117] "RemoveContainer" containerID="74609a44fc6d7063c13ddd264885dfeca3dff19ad7b6e5adbdf42915127b4d1c" Jan 29 12:20:49 crc kubenswrapper[4660]: E0129 12:20:49.007898 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74609a44fc6d7063c13ddd264885dfeca3dff19ad7b6e5adbdf42915127b4d1c\": container with ID starting with 74609a44fc6d7063c13ddd264885dfeca3dff19ad7b6e5adbdf42915127b4d1c not found: ID does not exist" containerID="74609a44fc6d7063c13ddd264885dfeca3dff19ad7b6e5adbdf42915127b4d1c" Jan 29 12:20:49 crc kubenswrapper[4660]: I0129 12:20:49.007931 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74609a44fc6d7063c13ddd264885dfeca3dff19ad7b6e5adbdf42915127b4d1c"} err="failed to get container status \"74609a44fc6d7063c13ddd264885dfeca3dff19ad7b6e5adbdf42915127b4d1c\": rpc error: code = NotFound desc = could not find container \"74609a44fc6d7063c13ddd264885dfeca3dff19ad7b6e5adbdf42915127b4d1c\": container with ID starting with 74609a44fc6d7063c13ddd264885dfeca3dff19ad7b6e5adbdf42915127b4d1c not found: ID does not exist" Jan 29 12:20:49 crc kubenswrapper[4660]: I0129 12:20:49.007953 4660 scope.go:117] "RemoveContainer" containerID="3ef4cace577852d8fc66e84742f74db545bb81e3491bb573e0d1a971866f9d91" Jan 29 12:20:49 crc kubenswrapper[4660]: E0129 12:20:49.008347 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ef4cace577852d8fc66e84742f74db545bb81e3491bb573e0d1a971866f9d91\": container with ID starting with 3ef4cace577852d8fc66e84742f74db545bb81e3491bb573e0d1a971866f9d91 not found: ID does not exist" containerID="3ef4cace577852d8fc66e84742f74db545bb81e3491bb573e0d1a971866f9d91" Jan 29 12:20:49 crc kubenswrapper[4660]: I0129 12:20:49.008375 4660 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3ef4cace577852d8fc66e84742f74db545bb81e3491bb573e0d1a971866f9d91"} err="failed to get container status \"3ef4cace577852d8fc66e84742f74db545bb81e3491bb573e0d1a971866f9d91\": rpc error: code = NotFound desc = could not find container \"3ef4cace577852d8fc66e84742f74db545bb81e3491bb573e0d1a971866f9d91\": container with ID starting with 3ef4cace577852d8fc66e84742f74db545bb81e3491bb573e0d1a971866f9d91 not found: ID does not exist" Jan 29 12:20:49 crc kubenswrapper[4660]: I0129 12:20:49.553613 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3a0822a-ce17-4cc0-93bf-2143b62f2bf4" (UID: "e3a0822a-ce17-4cc0-93bf-2143b62f2bf4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:20:49 crc kubenswrapper[4660]: I0129 12:20:49.605090 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:20:49 crc kubenswrapper[4660]: I0129 12:20:49.861951 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c5j6x"] Jan 29 12:20:49 crc kubenswrapper[4660]: I0129 12:20:49.868810 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c5j6x"] Jan 29 12:20:50 crc kubenswrapper[4660]: I0129 12:20:50.975051 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:50 crc kubenswrapper[4660]: I0129 12:20:50.975105 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:51 crc kubenswrapper[4660]: I0129 12:20:51.023310 4660 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:51 crc kubenswrapper[4660]: I0129 12:20:51.483615 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a0822a-ce17-4cc0-93bf-2143b62f2bf4" path="/var/lib/kubelet/pods/e3a0822a-ce17-4cc0-93bf-2143b62f2bf4/volumes" Jan 29 12:20:51 crc kubenswrapper[4660]: I0129 12:20:51.988783 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:52 crc kubenswrapper[4660]: I0129 12:20:52.845970 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fxb8t"] Jan 29 12:20:53 crc kubenswrapper[4660]: I0129 12:20:53.960839 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fxb8t" podUID="1a163b04-9b94-476d-b596-e37eb463554c" containerName="registry-server" containerID="cri-o://3bf8ce799a4553a90b64d937c54958b9eb73513395b7d6640b47e83416caed8e" gracePeriod=2 Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.395771 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.470554 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a163b04-9b94-476d-b596-e37eb463554c-catalog-content\") pod \"1a163b04-9b94-476d-b596-e37eb463554c\" (UID: \"1a163b04-9b94-476d-b596-e37eb463554c\") " Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.470619 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv2tn\" (UniqueName: \"kubernetes.io/projected/1a163b04-9b94-476d-b596-e37eb463554c-kube-api-access-xv2tn\") pod \"1a163b04-9b94-476d-b596-e37eb463554c\" (UID: \"1a163b04-9b94-476d-b596-e37eb463554c\") " Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.470659 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a163b04-9b94-476d-b596-e37eb463554c-utilities\") pod \"1a163b04-9b94-476d-b596-e37eb463554c\" (UID: \"1a163b04-9b94-476d-b596-e37eb463554c\") " Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.471735 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a163b04-9b94-476d-b596-e37eb463554c-utilities" (OuterVolumeSpecName: "utilities") pod "1a163b04-9b94-476d-b596-e37eb463554c" (UID: "1a163b04-9b94-476d-b596-e37eb463554c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.481963 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a163b04-9b94-476d-b596-e37eb463554c-kube-api-access-xv2tn" (OuterVolumeSpecName: "kube-api-access-xv2tn") pod "1a163b04-9b94-476d-b596-e37eb463554c" (UID: "1a163b04-9b94-476d-b596-e37eb463554c"). InnerVolumeSpecName "kube-api-access-xv2tn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.529007 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a163b04-9b94-476d-b596-e37eb463554c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a163b04-9b94-476d-b596-e37eb463554c" (UID: "1a163b04-9b94-476d-b596-e37eb463554c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.571726 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a163b04-9b94-476d-b596-e37eb463554c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.571760 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv2tn\" (UniqueName: \"kubernetes.io/projected/1a163b04-9b94-476d-b596-e37eb463554c-kube-api-access-xv2tn\") on node \"crc\" DevicePath \"\"" Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.571777 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a163b04-9b94-476d-b596-e37eb463554c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.968868 4660 generic.go:334] "Generic (PLEG): container finished" podID="1a163b04-9b94-476d-b596-e37eb463554c" containerID="3bf8ce799a4553a90b64d937c54958b9eb73513395b7d6640b47e83416caed8e" exitCode=0 Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.968927 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxb8t" event={"ID":"1a163b04-9b94-476d-b596-e37eb463554c","Type":"ContainerDied","Data":"3bf8ce799a4553a90b64d937c54958b9eb73513395b7d6640b47e83416caed8e"} Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.969999 4660 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-fxb8t" event={"ID":"1a163b04-9b94-476d-b596-e37eb463554c","Type":"ContainerDied","Data":"c6ac0f57989bacb63d7a6723f644c186511ac18e808b0ed5450aab474f39c542"} Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.968992 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fxb8t" Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.970061 4660 scope.go:117] "RemoveContainer" containerID="3bf8ce799a4553a90b64d937c54958b9eb73513395b7d6640b47e83416caed8e" Jan 29 12:20:54 crc kubenswrapper[4660]: I0129 12:20:54.993465 4660 scope.go:117] "RemoveContainer" containerID="f76d8dfdcd82189b9db4390eeb81d141d7449bcb58500ac0e96eb70c7d79bec5" Jan 29 12:20:55 crc kubenswrapper[4660]: I0129 12:20:55.007165 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fxb8t"] Jan 29 12:20:55 crc kubenswrapper[4660]: I0129 12:20:55.013900 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fxb8t"] Jan 29 12:20:55 crc kubenswrapper[4660]: I0129 12:20:55.027162 4660 scope.go:117] "RemoveContainer" containerID="080733205ba40d7c60c0d8e512329ddd04b6ceb83490f5fbe87f5367ffca6bc7" Jan 29 12:20:55 crc kubenswrapper[4660]: I0129 12:20:55.041277 4660 scope.go:117] "RemoveContainer" containerID="3bf8ce799a4553a90b64d937c54958b9eb73513395b7d6640b47e83416caed8e" Jan 29 12:20:55 crc kubenswrapper[4660]: E0129 12:20:55.041647 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bf8ce799a4553a90b64d937c54958b9eb73513395b7d6640b47e83416caed8e\": container with ID starting with 3bf8ce799a4553a90b64d937c54958b9eb73513395b7d6640b47e83416caed8e not found: ID does not exist" containerID="3bf8ce799a4553a90b64d937c54958b9eb73513395b7d6640b47e83416caed8e" Jan 29 12:20:55 crc kubenswrapper[4660]: I0129 
12:20:55.041862 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bf8ce799a4553a90b64d937c54958b9eb73513395b7d6640b47e83416caed8e"} err="failed to get container status \"3bf8ce799a4553a90b64d937c54958b9eb73513395b7d6640b47e83416caed8e\": rpc error: code = NotFound desc = could not find container \"3bf8ce799a4553a90b64d937c54958b9eb73513395b7d6640b47e83416caed8e\": container with ID starting with 3bf8ce799a4553a90b64d937c54958b9eb73513395b7d6640b47e83416caed8e not found: ID does not exist" Jan 29 12:20:55 crc kubenswrapper[4660]: I0129 12:20:55.041952 4660 scope.go:117] "RemoveContainer" containerID="f76d8dfdcd82189b9db4390eeb81d141d7449bcb58500ac0e96eb70c7d79bec5" Jan 29 12:20:55 crc kubenswrapper[4660]: E0129 12:20:55.042234 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76d8dfdcd82189b9db4390eeb81d141d7449bcb58500ac0e96eb70c7d79bec5\": container with ID starting with f76d8dfdcd82189b9db4390eeb81d141d7449bcb58500ac0e96eb70c7d79bec5 not found: ID does not exist" containerID="f76d8dfdcd82189b9db4390eeb81d141d7449bcb58500ac0e96eb70c7d79bec5" Jan 29 12:20:55 crc kubenswrapper[4660]: I0129 12:20:55.042256 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76d8dfdcd82189b9db4390eeb81d141d7449bcb58500ac0e96eb70c7d79bec5"} err="failed to get container status \"f76d8dfdcd82189b9db4390eeb81d141d7449bcb58500ac0e96eb70c7d79bec5\": rpc error: code = NotFound desc = could not find container \"f76d8dfdcd82189b9db4390eeb81d141d7449bcb58500ac0e96eb70c7d79bec5\": container with ID starting with f76d8dfdcd82189b9db4390eeb81d141d7449bcb58500ac0e96eb70c7d79bec5 not found: ID does not exist" Jan 29 12:20:55 crc kubenswrapper[4660]: I0129 12:20:55.042287 4660 scope.go:117] "RemoveContainer" containerID="080733205ba40d7c60c0d8e512329ddd04b6ceb83490f5fbe87f5367ffca6bc7" Jan 29 12:20:55 crc 
kubenswrapper[4660]: E0129 12:20:55.042550 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"080733205ba40d7c60c0d8e512329ddd04b6ceb83490f5fbe87f5367ffca6bc7\": container with ID starting with 080733205ba40d7c60c0d8e512329ddd04b6ceb83490f5fbe87f5367ffca6bc7 not found: ID does not exist" containerID="080733205ba40d7c60c0d8e512329ddd04b6ceb83490f5fbe87f5367ffca6bc7" Jan 29 12:20:55 crc kubenswrapper[4660]: I0129 12:20:55.042624 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"080733205ba40d7c60c0d8e512329ddd04b6ceb83490f5fbe87f5367ffca6bc7"} err="failed to get container status \"080733205ba40d7c60c0d8e512329ddd04b6ceb83490f5fbe87f5367ffca6bc7\": rpc error: code = NotFound desc = could not find container \"080733205ba40d7c60c0d8e512329ddd04b6ceb83490f5fbe87f5367ffca6bc7\": container with ID starting with 080733205ba40d7c60c0d8e512329ddd04b6ceb83490f5fbe87f5367ffca6bc7 not found: ID does not exist" Jan 29 12:20:55 crc kubenswrapper[4660]: I0129 12:20:55.480110 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a163b04-9b94-476d-b596-e37eb463554c" path="/var/lib/kubelet/pods/1a163b04-9b94-476d-b596-e37eb463554c/volumes" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.043206 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8jvcg"] Jan 29 12:21:00 crc kubenswrapper[4660]: E0129 12:21:00.043805 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a0822a-ce17-4cc0-93bf-2143b62f2bf4" containerName="registry-server" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.043818 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a0822a-ce17-4cc0-93bf-2143b62f2bf4" containerName="registry-server" Jan 29 12:21:00 crc kubenswrapper[4660]: E0129 12:21:00.043837 4660 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1a163b04-9b94-476d-b596-e37eb463554c" containerName="extract-utilities" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.043843 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a163b04-9b94-476d-b596-e37eb463554c" containerName="extract-utilities" Jan 29 12:21:00 crc kubenswrapper[4660]: E0129 12:21:00.043852 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a163b04-9b94-476d-b596-e37eb463554c" containerName="extract-content" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.043860 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a163b04-9b94-476d-b596-e37eb463554c" containerName="extract-content" Jan 29 12:21:00 crc kubenswrapper[4660]: E0129 12:21:00.043873 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a0822a-ce17-4cc0-93bf-2143b62f2bf4" containerName="extract-utilities" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.043879 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a0822a-ce17-4cc0-93bf-2143b62f2bf4" containerName="extract-utilities" Jan 29 12:21:00 crc kubenswrapper[4660]: E0129 12:21:00.043888 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a0822a-ce17-4cc0-93bf-2143b62f2bf4" containerName="extract-content" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.043894 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a0822a-ce17-4cc0-93bf-2143b62f2bf4" containerName="extract-content" Jan 29 12:21:00 crc kubenswrapper[4660]: E0129 12:21:00.043909 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a163b04-9b94-476d-b596-e37eb463554c" containerName="registry-server" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.043914 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a163b04-9b94-476d-b596-e37eb463554c" containerName="registry-server" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.044042 4660 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e3a0822a-ce17-4cc0-93bf-2143b62f2bf4" containerName="registry-server" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.044061 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a163b04-9b94-476d-b596-e37eb463554c" containerName="registry-server" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.044991 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.061869 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jvcg"] Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.149070 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsgkr\" (UniqueName: \"kubernetes.io/projected/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-kube-api-access-wsgkr\") pod \"redhat-marketplace-8jvcg\" (UID: \"c437d11f-4c5d-4d96-8f72-89d19d89ba2f\") " pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.149165 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-utilities\") pod \"redhat-marketplace-8jvcg\" (UID: \"c437d11f-4c5d-4d96-8f72-89d19d89ba2f\") " pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.149202 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-catalog-content\") pod \"redhat-marketplace-8jvcg\" (UID: \"c437d11f-4c5d-4d96-8f72-89d19d89ba2f\") " pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.250900 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wsgkr\" (UniqueName: \"kubernetes.io/projected/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-kube-api-access-wsgkr\") pod \"redhat-marketplace-8jvcg\" (UID: \"c437d11f-4c5d-4d96-8f72-89d19d89ba2f\") " pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.250959 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-utilities\") pod \"redhat-marketplace-8jvcg\" (UID: \"c437d11f-4c5d-4d96-8f72-89d19d89ba2f\") " pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.250992 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-catalog-content\") pod \"redhat-marketplace-8jvcg\" (UID: \"c437d11f-4c5d-4d96-8f72-89d19d89ba2f\") " pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.251403 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-catalog-content\") pod \"redhat-marketplace-8jvcg\" (UID: \"c437d11f-4c5d-4d96-8f72-89d19d89ba2f\") " pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.251818 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-utilities\") pod \"redhat-marketplace-8jvcg\" (UID: \"c437d11f-4c5d-4d96-8f72-89d19d89ba2f\") " pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.275031 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wsgkr\" (UniqueName: \"kubernetes.io/projected/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-kube-api-access-wsgkr\") pod \"redhat-marketplace-8jvcg\" (UID: \"c437d11f-4c5d-4d96-8f72-89d19d89ba2f\") " pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.386153 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:00 crc kubenswrapper[4660]: I0129 12:21:00.831381 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jvcg"] Jan 29 12:21:01 crc kubenswrapper[4660]: I0129 12:21:01.011321 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jvcg" event={"ID":"c437d11f-4c5d-4d96-8f72-89d19d89ba2f","Type":"ContainerStarted","Data":"6d7e26c291064e9e0bb4655b24f76e7819e006850dc1e29173097e93c44c77fe"} Jan 29 12:21:01 crc kubenswrapper[4660]: I0129 12:21:01.011534 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jvcg" event={"ID":"c437d11f-4c5d-4d96-8f72-89d19d89ba2f","Type":"ContainerStarted","Data":"e8532545482c87538ca0ebc7128b8497bfaf7614e5d31de1b3243cda33559618"} Jan 29 12:21:02 crc kubenswrapper[4660]: I0129 12:21:02.022166 4660 generic.go:334] "Generic (PLEG): container finished" podID="c437d11f-4c5d-4d96-8f72-89d19d89ba2f" containerID="6d7e26c291064e9e0bb4655b24f76e7819e006850dc1e29173097e93c44c77fe" exitCode=0 Jan 29 12:21:02 crc kubenswrapper[4660]: I0129 12:21:02.022299 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jvcg" event={"ID":"c437d11f-4c5d-4d96-8f72-89d19d89ba2f","Type":"ContainerDied","Data":"6d7e26c291064e9e0bb4655b24f76e7819e006850dc1e29173097e93c44c77fe"} Jan 29 12:21:03 crc kubenswrapper[4660]: I0129 12:21:03.031216 4660 generic.go:334] "Generic (PLEG): container finished" 
podID="c437d11f-4c5d-4d96-8f72-89d19d89ba2f" containerID="4f591b5ea375af856c1ce433ecedae185bf50bfe8309383fe95275ca0c7aa186" exitCode=0 Jan 29 12:21:03 crc kubenswrapper[4660]: I0129 12:21:03.031415 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jvcg" event={"ID":"c437d11f-4c5d-4d96-8f72-89d19d89ba2f","Type":"ContainerDied","Data":"4f591b5ea375af856c1ce433ecedae185bf50bfe8309383fe95275ca0c7aa186"} Jan 29 12:21:05 crc kubenswrapper[4660]: I0129 12:21:05.045786 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jvcg" event={"ID":"c437d11f-4c5d-4d96-8f72-89d19d89ba2f","Type":"ContainerStarted","Data":"50e825c4214c4696c615219b0e6f7b5ccc95833ed1852d35b31c001d6c782abc"} Jan 29 12:21:05 crc kubenswrapper[4660]: I0129 12:21:05.066577 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8jvcg" podStartSLOduration=2.162920477 podStartE2EDuration="5.066558682s" podCreationTimestamp="2026-01-29 12:21:00 +0000 UTC" firstStartedPulling="2026-01-29 12:21:01.012585924 +0000 UTC m=+898.235528056" lastFinishedPulling="2026-01-29 12:21:03.916224129 +0000 UTC m=+901.139166261" observedRunningTime="2026-01-29 12:21:05.062489455 +0000 UTC m=+902.285431597" watchObservedRunningTime="2026-01-29 12:21:05.066558682 +0000 UTC m=+902.289500824" Jan 29 12:21:10 crc kubenswrapper[4660]: I0129 12:21:10.386932 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:10 crc kubenswrapper[4660]: I0129 12:21:10.387953 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:10 crc kubenswrapper[4660]: I0129 12:21:10.425719 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:11 crc 
kubenswrapper[4660]: I0129 12:21:11.137475 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:11 crc kubenswrapper[4660]: I0129 12:21:11.664372 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jvcg"] Jan 29 12:21:13 crc kubenswrapper[4660]: I0129 12:21:13.116209 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8jvcg" podUID="c437d11f-4c5d-4d96-8f72-89d19d89ba2f" containerName="registry-server" containerID="cri-o://50e825c4214c4696c615219b0e6f7b5ccc95833ed1852d35b31c001d6c782abc" gracePeriod=2 Jan 29 12:21:13 crc kubenswrapper[4660]: I0129 12:21:13.702759 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:13 crc kubenswrapper[4660]: I0129 12:21:13.743616 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-catalog-content\") pod \"c437d11f-4c5d-4d96-8f72-89d19d89ba2f\" (UID: \"c437d11f-4c5d-4d96-8f72-89d19d89ba2f\") " Jan 29 12:21:13 crc kubenswrapper[4660]: I0129 12:21:13.743776 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsgkr\" (UniqueName: \"kubernetes.io/projected/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-kube-api-access-wsgkr\") pod \"c437d11f-4c5d-4d96-8f72-89d19d89ba2f\" (UID: \"c437d11f-4c5d-4d96-8f72-89d19d89ba2f\") " Jan 29 12:21:13 crc kubenswrapper[4660]: I0129 12:21:13.743924 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-utilities\") pod \"c437d11f-4c5d-4d96-8f72-89d19d89ba2f\" (UID: \"c437d11f-4c5d-4d96-8f72-89d19d89ba2f\") " Jan 29 12:21:13 crc 
kubenswrapper[4660]: I0129 12:21:13.745682 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-utilities" (OuterVolumeSpecName: "utilities") pod "c437d11f-4c5d-4d96-8f72-89d19d89ba2f" (UID: "c437d11f-4c5d-4d96-8f72-89d19d89ba2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:21:13 crc kubenswrapper[4660]: I0129 12:21:13.759834 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-kube-api-access-wsgkr" (OuterVolumeSpecName: "kube-api-access-wsgkr") pod "c437d11f-4c5d-4d96-8f72-89d19d89ba2f" (UID: "c437d11f-4c5d-4d96-8f72-89d19d89ba2f"). InnerVolumeSpecName "kube-api-access-wsgkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:21:13 crc kubenswrapper[4660]: I0129 12:21:13.782956 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c437d11f-4c5d-4d96-8f72-89d19d89ba2f" (UID: "c437d11f-4c5d-4d96-8f72-89d19d89ba2f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:21:13 crc kubenswrapper[4660]: I0129 12:21:13.845124 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:13 crc kubenswrapper[4660]: I0129 12:21:13.845167 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsgkr\" (UniqueName: \"kubernetes.io/projected/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-kube-api-access-wsgkr\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:13 crc kubenswrapper[4660]: I0129 12:21:13.845179 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c437d11f-4c5d-4d96-8f72-89d19d89ba2f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:14 crc kubenswrapper[4660]: I0129 12:21:14.125169 4660 generic.go:334] "Generic (PLEG): container finished" podID="c437d11f-4c5d-4d96-8f72-89d19d89ba2f" containerID="50e825c4214c4696c615219b0e6f7b5ccc95833ed1852d35b31c001d6c782abc" exitCode=0 Jan 29 12:21:14 crc kubenswrapper[4660]: I0129 12:21:14.125214 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jvcg" event={"ID":"c437d11f-4c5d-4d96-8f72-89d19d89ba2f","Type":"ContainerDied","Data":"50e825c4214c4696c615219b0e6f7b5ccc95833ed1852d35b31c001d6c782abc"} Jan 29 12:21:14 crc kubenswrapper[4660]: I0129 12:21:14.125244 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jvcg" event={"ID":"c437d11f-4c5d-4d96-8f72-89d19d89ba2f","Type":"ContainerDied","Data":"e8532545482c87538ca0ebc7128b8497bfaf7614e5d31de1b3243cda33559618"} Jan 29 12:21:14 crc kubenswrapper[4660]: I0129 12:21:14.125264 4660 scope.go:117] "RemoveContainer" containerID="50e825c4214c4696c615219b0e6f7b5ccc95833ed1852d35b31c001d6c782abc" Jan 29 12:21:14 crc kubenswrapper[4660]: I0129 
12:21:14.125398 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jvcg" Jan 29 12:21:14 crc kubenswrapper[4660]: I0129 12:21:14.152789 4660 scope.go:117] "RemoveContainer" containerID="4f591b5ea375af856c1ce433ecedae185bf50bfe8309383fe95275ca0c7aa186" Jan 29 12:21:14 crc kubenswrapper[4660]: I0129 12:21:14.157269 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jvcg"] Jan 29 12:21:14 crc kubenswrapper[4660]: I0129 12:21:14.167142 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jvcg"] Jan 29 12:21:14 crc kubenswrapper[4660]: I0129 12:21:14.176007 4660 scope.go:117] "RemoveContainer" containerID="6d7e26c291064e9e0bb4655b24f76e7819e006850dc1e29173097e93c44c77fe" Jan 29 12:21:14 crc kubenswrapper[4660]: I0129 12:21:14.200321 4660 scope.go:117] "RemoveContainer" containerID="50e825c4214c4696c615219b0e6f7b5ccc95833ed1852d35b31c001d6c782abc" Jan 29 12:21:14 crc kubenswrapper[4660]: E0129 12:21:14.200834 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50e825c4214c4696c615219b0e6f7b5ccc95833ed1852d35b31c001d6c782abc\": container with ID starting with 50e825c4214c4696c615219b0e6f7b5ccc95833ed1852d35b31c001d6c782abc not found: ID does not exist" containerID="50e825c4214c4696c615219b0e6f7b5ccc95833ed1852d35b31c001d6c782abc" Jan 29 12:21:14 crc kubenswrapper[4660]: I0129 12:21:14.200882 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e825c4214c4696c615219b0e6f7b5ccc95833ed1852d35b31c001d6c782abc"} err="failed to get container status \"50e825c4214c4696c615219b0e6f7b5ccc95833ed1852d35b31c001d6c782abc\": rpc error: code = NotFound desc = could not find container \"50e825c4214c4696c615219b0e6f7b5ccc95833ed1852d35b31c001d6c782abc\": container with ID starting with 
50e825c4214c4696c615219b0e6f7b5ccc95833ed1852d35b31c001d6c782abc not found: ID does not exist" Jan 29 12:21:14 crc kubenswrapper[4660]: I0129 12:21:14.200911 4660 scope.go:117] "RemoveContainer" containerID="4f591b5ea375af856c1ce433ecedae185bf50bfe8309383fe95275ca0c7aa186" Jan 29 12:21:14 crc kubenswrapper[4660]: E0129 12:21:14.201380 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f591b5ea375af856c1ce433ecedae185bf50bfe8309383fe95275ca0c7aa186\": container with ID starting with 4f591b5ea375af856c1ce433ecedae185bf50bfe8309383fe95275ca0c7aa186 not found: ID does not exist" containerID="4f591b5ea375af856c1ce433ecedae185bf50bfe8309383fe95275ca0c7aa186" Jan 29 12:21:14 crc kubenswrapper[4660]: I0129 12:21:14.201438 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f591b5ea375af856c1ce433ecedae185bf50bfe8309383fe95275ca0c7aa186"} err="failed to get container status \"4f591b5ea375af856c1ce433ecedae185bf50bfe8309383fe95275ca0c7aa186\": rpc error: code = NotFound desc = could not find container \"4f591b5ea375af856c1ce433ecedae185bf50bfe8309383fe95275ca0c7aa186\": container with ID starting with 4f591b5ea375af856c1ce433ecedae185bf50bfe8309383fe95275ca0c7aa186 not found: ID does not exist" Jan 29 12:21:14 crc kubenswrapper[4660]: I0129 12:21:14.201465 4660 scope.go:117] "RemoveContainer" containerID="6d7e26c291064e9e0bb4655b24f76e7819e006850dc1e29173097e93c44c77fe" Jan 29 12:21:14 crc kubenswrapper[4660]: E0129 12:21:14.201776 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d7e26c291064e9e0bb4655b24f76e7819e006850dc1e29173097e93c44c77fe\": container with ID starting with 6d7e26c291064e9e0bb4655b24f76e7819e006850dc1e29173097e93c44c77fe not found: ID does not exist" containerID="6d7e26c291064e9e0bb4655b24f76e7819e006850dc1e29173097e93c44c77fe" Jan 29 12:21:14 crc 
kubenswrapper[4660]: I0129 12:21:14.201809 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d7e26c291064e9e0bb4655b24f76e7819e006850dc1e29173097e93c44c77fe"} err="failed to get container status \"6d7e26c291064e9e0bb4655b24f76e7819e006850dc1e29173097e93c44c77fe\": rpc error: code = NotFound desc = could not find container \"6d7e26c291064e9e0bb4655b24f76e7819e006850dc1e29173097e93c44c77fe\": container with ID starting with 6d7e26c291064e9e0bb4655b24f76e7819e006850dc1e29173097e93c44c77fe not found: ID does not exist" Jan 29 12:21:15 crc kubenswrapper[4660]: I0129 12:21:15.478017 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c437d11f-4c5d-4d96-8f72-89d19d89ba2f" path="/var/lib/kubelet/pods/c437d11f-4c5d-4d96-8f72-89d19d89ba2f/volumes" Jan 29 12:21:18 crc kubenswrapper[4660]: I0129 12:21:18.874464 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k9jc9"] Jan 29 12:21:18 crc kubenswrapper[4660]: E0129 12:21:18.875132 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c437d11f-4c5d-4d96-8f72-89d19d89ba2f" containerName="extract-utilities" Jan 29 12:21:18 crc kubenswrapper[4660]: I0129 12:21:18.875149 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c437d11f-4c5d-4d96-8f72-89d19d89ba2f" containerName="extract-utilities" Jan 29 12:21:18 crc kubenswrapper[4660]: E0129 12:21:18.875162 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c437d11f-4c5d-4d96-8f72-89d19d89ba2f" containerName="registry-server" Jan 29 12:21:18 crc kubenswrapper[4660]: I0129 12:21:18.875170 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c437d11f-4c5d-4d96-8f72-89d19d89ba2f" containerName="registry-server" Jan 29 12:21:18 crc kubenswrapper[4660]: E0129 12:21:18.875187 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c437d11f-4c5d-4d96-8f72-89d19d89ba2f" containerName="extract-content" Jan 29 
12:21:18 crc kubenswrapper[4660]: I0129 12:21:18.875196 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c437d11f-4c5d-4d96-8f72-89d19d89ba2f" containerName="extract-content" Jan 29 12:21:18 crc kubenswrapper[4660]: I0129 12:21:18.875360 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="c437d11f-4c5d-4d96-8f72-89d19d89ba2f" containerName="registry-server" Jan 29 12:21:18 crc kubenswrapper[4660]: I0129 12:21:18.876544 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:18 crc kubenswrapper[4660]: I0129 12:21:18.886911 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9jc9"] Jan 29 12:21:18 crc kubenswrapper[4660]: I0129 12:21:18.910538 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqtwq\" (UniqueName: \"kubernetes.io/projected/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-kube-api-access-bqtwq\") pod \"redhat-operators-k9jc9\" (UID: \"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e\") " pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:18 crc kubenswrapper[4660]: I0129 12:21:18.910656 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-catalog-content\") pod \"redhat-operators-k9jc9\" (UID: \"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e\") " pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:18 crc kubenswrapper[4660]: I0129 12:21:18.910707 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-utilities\") pod \"redhat-operators-k9jc9\" (UID: \"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e\") " pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:19 crc 
kubenswrapper[4660]: I0129 12:21:19.011550 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-utilities\") pod \"redhat-operators-k9jc9\" (UID: \"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e\") " pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:19 crc kubenswrapper[4660]: I0129 12:21:19.011606 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqtwq\" (UniqueName: \"kubernetes.io/projected/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-kube-api-access-bqtwq\") pod \"redhat-operators-k9jc9\" (UID: \"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e\") " pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:19 crc kubenswrapper[4660]: I0129 12:21:19.012079 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-catalog-content\") pod \"redhat-operators-k9jc9\" (UID: \"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e\") " pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:19 crc kubenswrapper[4660]: I0129 12:21:19.012166 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-utilities\") pod \"redhat-operators-k9jc9\" (UID: \"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e\") " pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:19 crc kubenswrapper[4660]: I0129 12:21:19.012456 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-catalog-content\") pod \"redhat-operators-k9jc9\" (UID: \"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e\") " pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:19 crc kubenswrapper[4660]: I0129 12:21:19.032835 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqtwq\" (UniqueName: \"kubernetes.io/projected/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-kube-api-access-bqtwq\") pod \"redhat-operators-k9jc9\" (UID: \"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e\") " pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:19 crc kubenswrapper[4660]: I0129 12:21:19.193810 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:19 crc kubenswrapper[4660]: I0129 12:21:19.648669 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9jc9"] Jan 29 12:21:19 crc kubenswrapper[4660]: W0129 12:21:19.658077 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ec1cd2e_37bc_4e99_ad21_b4b70cf7ce7e.slice/crio-2f3468f95739494025c8438e10f6f02186ba40a213be2e1ddfc07e193fb2552b WatchSource:0}: Error finding container 2f3468f95739494025c8438e10f6f02186ba40a213be2e1ddfc07e193fb2552b: Status 404 returned error can't find the container with id 2f3468f95739494025c8438e10f6f02186ba40a213be2e1ddfc07e193fb2552b Jan 29 12:21:20 crc kubenswrapper[4660]: I0129 12:21:20.164378 4660 generic.go:334] "Generic (PLEG): container finished" podID="9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" containerID="aad16e47976f10eef4f7ace87de687cdb4083ca3339ca461a2c18e85112ad8d5" exitCode=0 Jan 29 12:21:20 crc kubenswrapper[4660]: I0129 12:21:20.164457 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9jc9" event={"ID":"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e","Type":"ContainerDied","Data":"aad16e47976f10eef4f7ace87de687cdb4083ca3339ca461a2c18e85112ad8d5"} Jan 29 12:21:20 crc kubenswrapper[4660]: I0129 12:21:20.164734 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9jc9" 
event={"ID":"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e","Type":"ContainerStarted","Data":"2f3468f95739494025c8438e10f6f02186ba40a213be2e1ddfc07e193fb2552b"} Jan 29 12:21:22 crc kubenswrapper[4660]: I0129 12:21:22.176757 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9jc9" event={"ID":"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e","Type":"ContainerStarted","Data":"c20d22b1b7322f62b03936ae42fb1689da6110a031b867b1249465529e123030"} Jan 29 12:21:23 crc kubenswrapper[4660]: I0129 12:21:23.184729 4660 generic.go:334] "Generic (PLEG): container finished" podID="9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" containerID="c20d22b1b7322f62b03936ae42fb1689da6110a031b867b1249465529e123030" exitCode=0 Jan 29 12:21:23 crc kubenswrapper[4660]: I0129 12:21:23.184783 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9jc9" event={"ID":"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e","Type":"ContainerDied","Data":"c20d22b1b7322f62b03936ae42fb1689da6110a031b867b1249465529e123030"} Jan 29 12:21:25 crc kubenswrapper[4660]: I0129 12:21:25.201122 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9jc9" event={"ID":"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e","Type":"ContainerStarted","Data":"e73725709d32787bc1127d37612cf175dab2b2e9f8add80c4fca4aa226181c8c"} Jan 29 12:21:25 crc kubenswrapper[4660]: I0129 12:21:25.227122 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k9jc9" podStartSLOduration=3.271886098 podStartE2EDuration="7.227101333s" podCreationTimestamp="2026-01-29 12:21:18 +0000 UTC" firstStartedPulling="2026-01-29 12:21:20.165636583 +0000 UTC m=+917.388578715" lastFinishedPulling="2026-01-29 12:21:24.120851818 +0000 UTC m=+921.343793950" observedRunningTime="2026-01-29 12:21:25.223777257 +0000 UTC m=+922.446719389" watchObservedRunningTime="2026-01-29 12:21:25.227101333 +0000 UTC m=+922.450043465" Jan 
29 12:21:29 crc kubenswrapper[4660]: I0129 12:21:29.194545 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:29 crc kubenswrapper[4660]: I0129 12:21:29.194867 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:30 crc kubenswrapper[4660]: I0129 12:21:30.242867 4660 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k9jc9" podUID="9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" containerName="registry-server" probeResult="failure" output=< Jan 29 12:21:30 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Jan 29 12:21:30 crc kubenswrapper[4660]: > Jan 29 12:21:39 crc kubenswrapper[4660]: I0129 12:21:39.228603 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:39 crc kubenswrapper[4660]: I0129 12:21:39.278928 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:39 crc kubenswrapper[4660]: I0129 12:21:39.458879 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9jc9"] Jan 29 12:21:40 crc kubenswrapper[4660]: I0129 12:21:40.304852 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k9jc9" podUID="9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" containerName="registry-server" containerID="cri-o://e73725709d32787bc1127d37612cf175dab2b2e9f8add80c4fca4aa226181c8c" gracePeriod=2 Jan 29 12:21:40 crc kubenswrapper[4660]: I0129 12:21:40.705207 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:40 crc kubenswrapper[4660]: I0129 12:21:40.896164 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-utilities\") pod \"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e\" (UID: \"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e\") " Jan 29 12:21:40 crc kubenswrapper[4660]: I0129 12:21:40.896239 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-catalog-content\") pod \"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e\" (UID: \"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e\") " Jan 29 12:21:40 crc kubenswrapper[4660]: I0129 12:21:40.896342 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqtwq\" (UniqueName: \"kubernetes.io/projected/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-kube-api-access-bqtwq\") pod \"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e\" (UID: \"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e\") " Jan 29 12:21:40 crc kubenswrapper[4660]: I0129 12:21:40.898317 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-utilities" (OuterVolumeSpecName: "utilities") pod "9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" (UID: "9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:21:40 crc kubenswrapper[4660]: I0129 12:21:40.904904 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-kube-api-access-bqtwq" (OuterVolumeSpecName: "kube-api-access-bqtwq") pod "9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" (UID: "9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e"). InnerVolumeSpecName "kube-api-access-bqtwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:21:40 crc kubenswrapper[4660]: I0129 12:21:40.998391 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqtwq\" (UniqueName: \"kubernetes.io/projected/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-kube-api-access-bqtwq\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:40 crc kubenswrapper[4660]: I0129 12:21:40.998441 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.023355 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" (UID: "9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.099417 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.313111 4660 generic.go:334] "Generic (PLEG): container finished" podID="9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" containerID="e73725709d32787bc1127d37612cf175dab2b2e9f8add80c4fca4aa226181c8c" exitCode=0 Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.313145 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9jc9" event={"ID":"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e","Type":"ContainerDied","Data":"e73725709d32787bc1127d37612cf175dab2b2e9f8add80c4fca4aa226181c8c"} Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.313207 4660 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-k9jc9" event={"ID":"9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e","Type":"ContainerDied","Data":"2f3468f95739494025c8438e10f6f02186ba40a213be2e1ddfc07e193fb2552b"} Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.313125 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9jc9" Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.313225 4660 scope.go:117] "RemoveContainer" containerID="e73725709d32787bc1127d37612cf175dab2b2e9f8add80c4fca4aa226181c8c" Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.332370 4660 scope.go:117] "RemoveContainer" containerID="c20d22b1b7322f62b03936ae42fb1689da6110a031b867b1249465529e123030" Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.353062 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9jc9"] Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.359192 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k9jc9"] Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.368609 4660 scope.go:117] "RemoveContainer" containerID="aad16e47976f10eef4f7ace87de687cdb4083ca3339ca461a2c18e85112ad8d5" Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.384526 4660 scope.go:117] "RemoveContainer" containerID="e73725709d32787bc1127d37612cf175dab2b2e9f8add80c4fca4aa226181c8c" Jan 29 12:21:41 crc kubenswrapper[4660]: E0129 12:21:41.385006 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73725709d32787bc1127d37612cf175dab2b2e9f8add80c4fca4aa226181c8c\": container with ID starting with e73725709d32787bc1127d37612cf175dab2b2e9f8add80c4fca4aa226181c8c not found: ID does not exist" containerID="e73725709d32787bc1127d37612cf175dab2b2e9f8add80c4fca4aa226181c8c" Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.385068 4660 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73725709d32787bc1127d37612cf175dab2b2e9f8add80c4fca4aa226181c8c"} err="failed to get container status \"e73725709d32787bc1127d37612cf175dab2b2e9f8add80c4fca4aa226181c8c\": rpc error: code = NotFound desc = could not find container \"e73725709d32787bc1127d37612cf175dab2b2e9f8add80c4fca4aa226181c8c\": container with ID starting with e73725709d32787bc1127d37612cf175dab2b2e9f8add80c4fca4aa226181c8c not found: ID does not exist" Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.385097 4660 scope.go:117] "RemoveContainer" containerID="c20d22b1b7322f62b03936ae42fb1689da6110a031b867b1249465529e123030" Jan 29 12:21:41 crc kubenswrapper[4660]: E0129 12:21:41.385491 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c20d22b1b7322f62b03936ae42fb1689da6110a031b867b1249465529e123030\": container with ID starting with c20d22b1b7322f62b03936ae42fb1689da6110a031b867b1249465529e123030 not found: ID does not exist" containerID="c20d22b1b7322f62b03936ae42fb1689da6110a031b867b1249465529e123030" Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.385524 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c20d22b1b7322f62b03936ae42fb1689da6110a031b867b1249465529e123030"} err="failed to get container status \"c20d22b1b7322f62b03936ae42fb1689da6110a031b867b1249465529e123030\": rpc error: code = NotFound desc = could not find container \"c20d22b1b7322f62b03936ae42fb1689da6110a031b867b1249465529e123030\": container with ID starting with c20d22b1b7322f62b03936ae42fb1689da6110a031b867b1249465529e123030 not found: ID does not exist" Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.385544 4660 scope.go:117] "RemoveContainer" containerID="aad16e47976f10eef4f7ace87de687cdb4083ca3339ca461a2c18e85112ad8d5" Jan 29 12:21:41 crc kubenswrapper[4660]: E0129 
12:21:41.385882 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aad16e47976f10eef4f7ace87de687cdb4083ca3339ca461a2c18e85112ad8d5\": container with ID starting with aad16e47976f10eef4f7ace87de687cdb4083ca3339ca461a2c18e85112ad8d5 not found: ID does not exist" containerID="aad16e47976f10eef4f7ace87de687cdb4083ca3339ca461a2c18e85112ad8d5" Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.385925 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad16e47976f10eef4f7ace87de687cdb4083ca3339ca461a2c18e85112ad8d5"} err="failed to get container status \"aad16e47976f10eef4f7ace87de687cdb4083ca3339ca461a2c18e85112ad8d5\": rpc error: code = NotFound desc = could not find container \"aad16e47976f10eef4f7ace87de687cdb4083ca3339ca461a2c18e85112ad8d5\": container with ID starting with aad16e47976f10eef4f7ace87de687cdb4083ca3339ca461a2c18e85112ad8d5 not found: ID does not exist" Jan 29 12:21:41 crc kubenswrapper[4660]: I0129 12:21:41.478086 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" path="/var/lib/kubelet/pods/9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e/volumes" Jan 29 12:21:56 crc kubenswrapper[4660]: I0129 12:21:56.269882 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:21:56 crc kubenswrapper[4660]: I0129 12:21:56.270501 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 29 12:22:26 crc kubenswrapper[4660]: I0129 12:22:26.269620 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:22:26 crc kubenswrapper[4660]: I0129 12:22:26.270292 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:22:56 crc kubenswrapper[4660]: I0129 12:22:56.269977 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:22:56 crc kubenswrapper[4660]: I0129 12:22:56.271059 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:22:56 crc kubenswrapper[4660]: I0129 12:22:56.271126 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:22:56 crc kubenswrapper[4660]: I0129 12:22:56.272191 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d1ba41f044c335ec471707e50a492b9e84168653c9eecc671750f18275596b6"} 
pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:22:56 crc kubenswrapper[4660]: I0129 12:22:56.272292 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" containerID="cri-o://3d1ba41f044c335ec471707e50a492b9e84168653c9eecc671750f18275596b6" gracePeriod=600 Jan 29 12:22:56 crc kubenswrapper[4660]: I0129 12:22:56.581715 4660 generic.go:334] "Generic (PLEG): container finished" podID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerID="3d1ba41f044c335ec471707e50a492b9e84168653c9eecc671750f18275596b6" exitCode=0 Jan 29 12:22:56 crc kubenswrapper[4660]: I0129 12:22:56.581774 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerDied","Data":"3d1ba41f044c335ec471707e50a492b9e84168653c9eecc671750f18275596b6"} Jan 29 12:22:56 crc kubenswrapper[4660]: I0129 12:22:56.582237 4660 scope.go:117] "RemoveContainer" containerID="2c16cbdb8ef8617300de27cd5dec6cb5c0e6298a291719baa852d3f0432803d5" Jan 29 12:22:57 crc kubenswrapper[4660]: I0129 12:22:57.591384 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerStarted","Data":"37b4760f23490ccea5a09492c529ce5f6a2dc9ef4956bf11b5d4c340fdbba110"} Jan 29 12:24:56 crc kubenswrapper[4660]: I0129 12:24:56.269244 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 29 12:24:56 crc kubenswrapper[4660]: I0129 12:24:56.271671 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:25:26 crc kubenswrapper[4660]: I0129 12:25:26.269618 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:25:26 crc kubenswrapper[4660]: I0129 12:25:26.271440 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:25:56 crc kubenswrapper[4660]: I0129 12:25:56.269260 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:25:56 crc kubenswrapper[4660]: I0129 12:25:56.269937 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:25:56 crc kubenswrapper[4660]: I0129 12:25:56.269992 4660 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:25:56 crc kubenswrapper[4660]: I0129 12:25:56.270716 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37b4760f23490ccea5a09492c529ce5f6a2dc9ef4956bf11b5d4c340fdbba110"} pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:25:56 crc kubenswrapper[4660]: I0129 12:25:56.270841 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" containerID="cri-o://37b4760f23490ccea5a09492c529ce5f6a2dc9ef4956bf11b5d4c340fdbba110" gracePeriod=600 Jan 29 12:25:56 crc kubenswrapper[4660]: I0129 12:25:56.928332 4660 generic.go:334] "Generic (PLEG): container finished" podID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerID="37b4760f23490ccea5a09492c529ce5f6a2dc9ef4956bf11b5d4c340fdbba110" exitCode=0 Jan 29 12:25:56 crc kubenswrapper[4660]: I0129 12:25:56.928403 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerDied","Data":"37b4760f23490ccea5a09492c529ce5f6a2dc9ef4956bf11b5d4c340fdbba110"} Jan 29 12:25:56 crc kubenswrapper[4660]: I0129 12:25:56.928988 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerStarted","Data":"d9ed707aaa9f2952cd7088a7a5f900080b30dba83908f0000c4e67ebbd66ab7a"} Jan 29 12:25:56 crc kubenswrapper[4660]: I0129 12:25:56.929010 4660 scope.go:117] "RemoveContainer" 
containerID="3d1ba41f044c335ec471707e50a492b9e84168653c9eecc671750f18275596b6" Jan 29 12:27:56 crc kubenswrapper[4660]: I0129 12:27:56.269143 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:27:56 crc kubenswrapper[4660]: I0129 12:27:56.269695 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:28:26 crc kubenswrapper[4660]: I0129 12:28:26.268881 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:28:26 crc kubenswrapper[4660]: I0129 12:28:26.269964 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:28:56 crc kubenswrapper[4660]: I0129 12:28:56.269236 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:28:56 crc kubenswrapper[4660]: I0129 12:28:56.269781 4660 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:28:56 crc kubenswrapper[4660]: I0129 12:28:56.269824 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:28:56 crc kubenswrapper[4660]: I0129 12:28:56.270417 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d9ed707aaa9f2952cd7088a7a5f900080b30dba83908f0000c4e67ebbd66ab7a"} pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:28:56 crc kubenswrapper[4660]: I0129 12:28:56.270473 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" containerID="cri-o://d9ed707aaa9f2952cd7088a7a5f900080b30dba83908f0000c4e67ebbd66ab7a" gracePeriod=600 Jan 29 12:28:57 crc kubenswrapper[4660]: I0129 12:28:57.113483 4660 generic.go:334] "Generic (PLEG): container finished" podID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerID="d9ed707aaa9f2952cd7088a7a5f900080b30dba83908f0000c4e67ebbd66ab7a" exitCode=0 Jan 29 12:28:57 crc kubenswrapper[4660]: I0129 12:28:57.113558 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerDied","Data":"d9ed707aaa9f2952cd7088a7a5f900080b30dba83908f0000c4e67ebbd66ab7a"} Jan 29 12:28:57 crc kubenswrapper[4660]: I0129 
12:28:57.114059 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerStarted","Data":"8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c"} Jan 29 12:28:57 crc kubenswrapper[4660]: I0129 12:28:57.114083 4660 scope.go:117] "RemoveContainer" containerID="37b4760f23490ccea5a09492c529ce5f6a2dc9ef4956bf11b5d4c340fdbba110" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.179447 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7"] Jan 29 12:30:00 crc kubenswrapper[4660]: E0129 12:30:00.181485 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" containerName="extract-content" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.181588 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" containerName="extract-content" Jan 29 12:30:00 crc kubenswrapper[4660]: E0129 12:30:00.181683 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" containerName="extract-utilities" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.181790 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" containerName="extract-utilities" Jan 29 12:30:00 crc kubenswrapper[4660]: E0129 12:30:00.181960 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" containerName="registry-server" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.182040 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" containerName="registry-server" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.182287 4660 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9ec1cd2e-37bc-4e99-ad21-b4b70cf7ce7e" containerName="registry-server" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.182856 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.185906 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.196324 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7"] Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.198584 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.320074 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjn7w\" (UniqueName: \"kubernetes.io/projected/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-kube-api-access-qjn7w\") pod \"collect-profiles-29494830-9z5k7\" (UID: \"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.320146 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-secret-volume\") pod \"collect-profiles-29494830-9z5k7\" (UID: \"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.320220 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-config-volume\") pod \"collect-profiles-29494830-9z5k7\" (UID: \"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.421777 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjn7w\" (UniqueName: \"kubernetes.io/projected/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-kube-api-access-qjn7w\") pod \"collect-profiles-29494830-9z5k7\" (UID: \"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.421823 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-secret-volume\") pod \"collect-profiles-29494830-9z5k7\" (UID: \"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.421879 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-config-volume\") pod \"collect-profiles-29494830-9z5k7\" (UID: \"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.422899 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-config-volume\") pod \"collect-profiles-29494830-9z5k7\" (UID: \"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.427982 4660 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-secret-volume\") pod \"collect-profiles-29494830-9z5k7\" (UID: \"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.441678 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjn7w\" (UniqueName: \"kubernetes.io/projected/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-kube-api-access-qjn7w\") pod \"collect-profiles-29494830-9z5k7\" (UID: \"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.516807 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" Jan 29 12:30:00 crc kubenswrapper[4660]: I0129 12:30:00.944046 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7"] Jan 29 12:30:01 crc kubenswrapper[4660]: I0129 12:30:01.524677 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" event={"ID":"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639","Type":"ContainerStarted","Data":"e574ad16004ccda7140fce088a981e0550833d9ed2b509884eda75f525e52de2"} Jan 29 12:30:01 crc kubenswrapper[4660]: I0129 12:30:01.525304 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" event={"ID":"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639","Type":"ContainerStarted","Data":"bdbdec7ff6644a0885e938fccf46887a7dee979d69c96d19a3e1232ea0e3abdd"} Jan 29 12:30:01 crc kubenswrapper[4660]: I0129 12:30:01.536470 4660 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" podStartSLOduration=1.5364536439999998 podStartE2EDuration="1.536453644s" podCreationTimestamp="2026-01-29 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:30:01.534728684 +0000 UTC m=+1438.757670816" watchObservedRunningTime="2026-01-29 12:30:01.536453644 +0000 UTC m=+1438.759395776" Jan 29 12:30:02 crc kubenswrapper[4660]: I0129 12:30:02.531658 4660 generic.go:334] "Generic (PLEG): container finished" podID="f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639" containerID="e574ad16004ccda7140fce088a981e0550833d9ed2b509884eda75f525e52de2" exitCode=0 Jan 29 12:30:02 crc kubenswrapper[4660]: I0129 12:30:02.532145 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" event={"ID":"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639","Type":"ContainerDied","Data":"e574ad16004ccda7140fce088a981e0550833d9ed2b509884eda75f525e52de2"} Jan 29 12:30:03 crc kubenswrapper[4660]: I0129 12:30:03.801887 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" Jan 29 12:30:03 crc kubenswrapper[4660]: I0129 12:30:03.873642 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-config-volume\") pod \"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639\" (UID: \"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639\") " Jan 29 12:30:03 crc kubenswrapper[4660]: I0129 12:30:03.873734 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjn7w\" (UniqueName: \"kubernetes.io/projected/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-kube-api-access-qjn7w\") pod \"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639\" (UID: \"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639\") " Jan 29 12:30:03 crc kubenswrapper[4660]: I0129 12:30:03.873835 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-secret-volume\") pod \"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639\" (UID: \"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639\") " Jan 29 12:30:03 crc kubenswrapper[4660]: I0129 12:30:03.874464 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-config-volume" (OuterVolumeSpecName: "config-volume") pod "f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639" (UID: "f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:30:03 crc kubenswrapper[4660]: I0129 12:30:03.874893 4660 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:30:03 crc kubenswrapper[4660]: I0129 12:30:03.880844 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-kube-api-access-qjn7w" (OuterVolumeSpecName: "kube-api-access-qjn7w") pod "f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639" (UID: "f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639"). InnerVolumeSpecName "kube-api-access-qjn7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:30:03 crc kubenswrapper[4660]: I0129 12:30:03.880891 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639" (UID: "f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:30:03 crc kubenswrapper[4660]: I0129 12:30:03.976511 4660 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:30:03 crc kubenswrapper[4660]: I0129 12:30:03.976555 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjn7w\" (UniqueName: \"kubernetes.io/projected/f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639-kube-api-access-qjn7w\") on node \"crc\" DevicePath \"\"" Jan 29 12:30:04 crc kubenswrapper[4660]: I0129 12:30:04.545188 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" event={"ID":"f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639","Type":"ContainerDied","Data":"bdbdec7ff6644a0885e938fccf46887a7dee979d69c96d19a3e1232ea0e3abdd"} Jan 29 12:30:04 crc kubenswrapper[4660]: I0129 12:30:04.545226 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdbdec7ff6644a0885e938fccf46887a7dee979d69c96d19a3e1232ea0e3abdd" Jan 29 12:30:04 crc kubenswrapper[4660]: I0129 12:30:04.545280 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-9z5k7" Jan 29 12:30:56 crc kubenswrapper[4660]: I0129 12:30:56.269380 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:30:56 crc kubenswrapper[4660]: I0129 12:30:56.270034 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:31:13 crc kubenswrapper[4660]: I0129 12:31:13.304043 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z7xxg"] Jan 29 12:31:13 crc kubenswrapper[4660]: E0129 12:31:13.304783 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639" containerName="collect-profiles" Jan 29 12:31:13 crc kubenswrapper[4660]: I0129 12:31:13.304799 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639" containerName="collect-profiles" Jan 29 12:31:13 crc kubenswrapper[4660]: I0129 12:31:13.304975 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79c9ab7-8b2a-47d4-a8d3-1e3d3f14a639" containerName="collect-profiles" Jan 29 12:31:13 crc kubenswrapper[4660]: I0129 12:31:13.306431 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:13 crc kubenswrapper[4660]: I0129 12:31:13.365858 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf969a82-6f4e-405c-8ea1-fce38a37e191-utilities\") pod \"community-operators-z7xxg\" (UID: \"cf969a82-6f4e-405c-8ea1-fce38a37e191\") " pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:13 crc kubenswrapper[4660]: I0129 12:31:13.366039 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjtv2\" (UniqueName: \"kubernetes.io/projected/cf969a82-6f4e-405c-8ea1-fce38a37e191-kube-api-access-jjtv2\") pod \"community-operators-z7xxg\" (UID: \"cf969a82-6f4e-405c-8ea1-fce38a37e191\") " pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:13 crc kubenswrapper[4660]: I0129 12:31:13.366075 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf969a82-6f4e-405c-8ea1-fce38a37e191-catalog-content\") pod \"community-operators-z7xxg\" (UID: \"cf969a82-6f4e-405c-8ea1-fce38a37e191\") " pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:13 crc kubenswrapper[4660]: I0129 12:31:13.375995 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z7xxg"] Jan 29 12:31:13 crc kubenswrapper[4660]: I0129 12:31:13.467943 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf969a82-6f4e-405c-8ea1-fce38a37e191-utilities\") pod \"community-operators-z7xxg\" (UID: \"cf969a82-6f4e-405c-8ea1-fce38a37e191\") " pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:13 crc kubenswrapper[4660]: I0129 12:31:13.468030 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jjtv2\" (UniqueName: \"kubernetes.io/projected/cf969a82-6f4e-405c-8ea1-fce38a37e191-kube-api-access-jjtv2\") pod \"community-operators-z7xxg\" (UID: \"cf969a82-6f4e-405c-8ea1-fce38a37e191\") " pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:13 crc kubenswrapper[4660]: I0129 12:31:13.468059 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf969a82-6f4e-405c-8ea1-fce38a37e191-catalog-content\") pod \"community-operators-z7xxg\" (UID: \"cf969a82-6f4e-405c-8ea1-fce38a37e191\") " pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:13 crc kubenswrapper[4660]: I0129 12:31:13.468710 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf969a82-6f4e-405c-8ea1-fce38a37e191-catalog-content\") pod \"community-operators-z7xxg\" (UID: \"cf969a82-6f4e-405c-8ea1-fce38a37e191\") " pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:13 crc kubenswrapper[4660]: I0129 12:31:13.468793 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf969a82-6f4e-405c-8ea1-fce38a37e191-utilities\") pod \"community-operators-z7xxg\" (UID: \"cf969a82-6f4e-405c-8ea1-fce38a37e191\") " pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:13 crc kubenswrapper[4660]: I0129 12:31:13.488582 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjtv2\" (UniqueName: \"kubernetes.io/projected/cf969a82-6f4e-405c-8ea1-fce38a37e191-kube-api-access-jjtv2\") pod \"community-operators-z7xxg\" (UID: \"cf969a82-6f4e-405c-8ea1-fce38a37e191\") " pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:13 crc kubenswrapper[4660]: I0129 12:31:13.645420 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:14 crc kubenswrapper[4660]: I0129 12:31:14.045023 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z7xxg"] Jan 29 12:31:15 crc kubenswrapper[4660]: I0129 12:31:15.031572 4660 generic.go:334] "Generic (PLEG): container finished" podID="cf969a82-6f4e-405c-8ea1-fce38a37e191" containerID="169cea05d6b78173fc9fe0b594c9a6489fcbd6d40806be546a1e863f906300b4" exitCode=0 Jan 29 12:31:15 crc kubenswrapper[4660]: I0129 12:31:15.031645 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7xxg" event={"ID":"cf969a82-6f4e-405c-8ea1-fce38a37e191","Type":"ContainerDied","Data":"169cea05d6b78173fc9fe0b594c9a6489fcbd6d40806be546a1e863f906300b4"} Jan 29 12:31:15 crc kubenswrapper[4660]: I0129 12:31:15.031913 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7xxg" event={"ID":"cf969a82-6f4e-405c-8ea1-fce38a37e191","Type":"ContainerStarted","Data":"fbe40d897fc8d5c6f7e0da47c0be35c9ee7c74d2cdd5e8e48cd76205c4607248"} Jan 29 12:31:15 crc kubenswrapper[4660]: I0129 12:31:15.033930 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:31:16 crc kubenswrapper[4660]: I0129 12:31:16.038720 4660 generic.go:334] "Generic (PLEG): container finished" podID="cf969a82-6f4e-405c-8ea1-fce38a37e191" containerID="1408e31317271f7c4b0861d8851504bc869e09e2bc8dc70eef3f279d854c7d21" exitCode=0 Jan 29 12:31:16 crc kubenswrapper[4660]: I0129 12:31:16.038896 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7xxg" event={"ID":"cf969a82-6f4e-405c-8ea1-fce38a37e191","Type":"ContainerDied","Data":"1408e31317271f7c4b0861d8851504bc869e09e2bc8dc70eef3f279d854c7d21"} Jan 29 12:31:17 crc kubenswrapper[4660]: I0129 12:31:17.050238 4660 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-z7xxg" event={"ID":"cf969a82-6f4e-405c-8ea1-fce38a37e191","Type":"ContainerStarted","Data":"e4e0b50081482a6a68f4d4b1c51827973c788eda96d81eb1a46d19937837bd0a"} Jan 29 12:31:17 crc kubenswrapper[4660]: I0129 12:31:17.066994 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z7xxg" podStartSLOduration=2.592580393 podStartE2EDuration="4.06697799s" podCreationTimestamp="2026-01-29 12:31:13 +0000 UTC" firstStartedPulling="2026-01-29 12:31:15.033519931 +0000 UTC m=+1512.256462073" lastFinishedPulling="2026-01-29 12:31:16.507917538 +0000 UTC m=+1513.730859670" observedRunningTime="2026-01-29 12:31:17.06490951 +0000 UTC m=+1514.287851662" watchObservedRunningTime="2026-01-29 12:31:17.06697799 +0000 UTC m=+1514.289920112" Jan 29 12:31:23 crc kubenswrapper[4660]: I0129 12:31:23.646305 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:23 crc kubenswrapper[4660]: I0129 12:31:23.646915 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:23 crc kubenswrapper[4660]: I0129 12:31:23.687311 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:24 crc kubenswrapper[4660]: I0129 12:31:24.163569 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:24 crc kubenswrapper[4660]: I0129 12:31:24.207720 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z7xxg"] Jan 29 12:31:26 crc kubenswrapper[4660]: I0129 12:31:26.107191 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z7xxg" 
podUID="cf969a82-6f4e-405c-8ea1-fce38a37e191" containerName="registry-server" containerID="cri-o://e4e0b50081482a6a68f4d4b1c51827973c788eda96d81eb1a46d19937837bd0a" gracePeriod=2 Jan 29 12:31:26 crc kubenswrapper[4660]: I0129 12:31:26.269771 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:31:26 crc kubenswrapper[4660]: I0129 12:31:26.269835 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:31:26 crc kubenswrapper[4660]: I0129 12:31:26.528672 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:26 crc kubenswrapper[4660]: I0129 12:31:26.657388 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf969a82-6f4e-405c-8ea1-fce38a37e191-catalog-content\") pod \"cf969a82-6f4e-405c-8ea1-fce38a37e191\" (UID: \"cf969a82-6f4e-405c-8ea1-fce38a37e191\") " Jan 29 12:31:26 crc kubenswrapper[4660]: I0129 12:31:26.658427 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjtv2\" (UniqueName: \"kubernetes.io/projected/cf969a82-6f4e-405c-8ea1-fce38a37e191-kube-api-access-jjtv2\") pod \"cf969a82-6f4e-405c-8ea1-fce38a37e191\" (UID: \"cf969a82-6f4e-405c-8ea1-fce38a37e191\") " Jan 29 12:31:26 crc kubenswrapper[4660]: I0129 12:31:26.658637 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf969a82-6f4e-405c-8ea1-fce38a37e191-utilities\") pod \"cf969a82-6f4e-405c-8ea1-fce38a37e191\" (UID: \"cf969a82-6f4e-405c-8ea1-fce38a37e191\") " Jan 29 12:31:26 crc kubenswrapper[4660]: I0129 12:31:26.659386 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf969a82-6f4e-405c-8ea1-fce38a37e191-utilities" (OuterVolumeSpecName: "utilities") pod "cf969a82-6f4e-405c-8ea1-fce38a37e191" (UID: "cf969a82-6f4e-405c-8ea1-fce38a37e191"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:31:26 crc kubenswrapper[4660]: I0129 12:31:26.693942 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf969a82-6f4e-405c-8ea1-fce38a37e191-kube-api-access-jjtv2" (OuterVolumeSpecName: "kube-api-access-jjtv2") pod "cf969a82-6f4e-405c-8ea1-fce38a37e191" (UID: "cf969a82-6f4e-405c-8ea1-fce38a37e191"). InnerVolumeSpecName "kube-api-access-jjtv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:31:26 crc kubenswrapper[4660]: I0129 12:31:26.762887 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjtv2\" (UniqueName: \"kubernetes.io/projected/cf969a82-6f4e-405c-8ea1-fce38a37e191-kube-api-access-jjtv2\") on node \"crc\" DevicePath \"\"" Jan 29 12:31:26 crc kubenswrapper[4660]: I0129 12:31:26.762926 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf969a82-6f4e-405c-8ea1-fce38a37e191-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:31:27 crc kubenswrapper[4660]: I0129 12:31:27.115245 4660 generic.go:334] "Generic (PLEG): container finished" podID="cf969a82-6f4e-405c-8ea1-fce38a37e191" containerID="e4e0b50081482a6a68f4d4b1c51827973c788eda96d81eb1a46d19937837bd0a" exitCode=0 Jan 29 12:31:27 crc kubenswrapper[4660]: I0129 12:31:27.115303 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7xxg" event={"ID":"cf969a82-6f4e-405c-8ea1-fce38a37e191","Type":"ContainerDied","Data":"e4e0b50081482a6a68f4d4b1c51827973c788eda96d81eb1a46d19937837bd0a"} Jan 29 12:31:27 crc kubenswrapper[4660]: I0129 12:31:27.115337 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z7xxg" Jan 29 12:31:27 crc kubenswrapper[4660]: I0129 12:31:27.115365 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z7xxg" event={"ID":"cf969a82-6f4e-405c-8ea1-fce38a37e191","Type":"ContainerDied","Data":"fbe40d897fc8d5c6f7e0da47c0be35c9ee7c74d2cdd5e8e48cd76205c4607248"} Jan 29 12:31:27 crc kubenswrapper[4660]: I0129 12:31:27.115390 4660 scope.go:117] "RemoveContainer" containerID="e4e0b50081482a6a68f4d4b1c51827973c788eda96d81eb1a46d19937837bd0a" Jan 29 12:31:27 crc kubenswrapper[4660]: I0129 12:31:27.138679 4660 scope.go:117] "RemoveContainer" containerID="1408e31317271f7c4b0861d8851504bc869e09e2bc8dc70eef3f279d854c7d21" Jan 29 12:31:27 crc kubenswrapper[4660]: I0129 12:31:27.155637 4660 scope.go:117] "RemoveContainer" containerID="169cea05d6b78173fc9fe0b594c9a6489fcbd6d40806be546a1e863f906300b4" Jan 29 12:31:27 crc kubenswrapper[4660]: I0129 12:31:27.185084 4660 scope.go:117] "RemoveContainer" containerID="e4e0b50081482a6a68f4d4b1c51827973c788eda96d81eb1a46d19937837bd0a" Jan 29 12:31:27 crc kubenswrapper[4660]: E0129 12:31:27.186589 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4e0b50081482a6a68f4d4b1c51827973c788eda96d81eb1a46d19937837bd0a\": container with ID starting with e4e0b50081482a6a68f4d4b1c51827973c788eda96d81eb1a46d19937837bd0a not found: ID does not exist" containerID="e4e0b50081482a6a68f4d4b1c51827973c788eda96d81eb1a46d19937837bd0a" Jan 29 12:31:27 crc kubenswrapper[4660]: I0129 12:31:27.186739 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e0b50081482a6a68f4d4b1c51827973c788eda96d81eb1a46d19937837bd0a"} err="failed to get container status \"e4e0b50081482a6a68f4d4b1c51827973c788eda96d81eb1a46d19937837bd0a\": rpc error: code = NotFound desc = could not find container 
\"e4e0b50081482a6a68f4d4b1c51827973c788eda96d81eb1a46d19937837bd0a\": container with ID starting with e4e0b50081482a6a68f4d4b1c51827973c788eda96d81eb1a46d19937837bd0a not found: ID does not exist" Jan 29 12:31:27 crc kubenswrapper[4660]: I0129 12:31:27.186845 4660 scope.go:117] "RemoveContainer" containerID="1408e31317271f7c4b0861d8851504bc869e09e2bc8dc70eef3f279d854c7d21" Jan 29 12:31:27 crc kubenswrapper[4660]: E0129 12:31:27.187871 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1408e31317271f7c4b0861d8851504bc869e09e2bc8dc70eef3f279d854c7d21\": container with ID starting with 1408e31317271f7c4b0861d8851504bc869e09e2bc8dc70eef3f279d854c7d21 not found: ID does not exist" containerID="1408e31317271f7c4b0861d8851504bc869e09e2bc8dc70eef3f279d854c7d21" Jan 29 12:31:27 crc kubenswrapper[4660]: I0129 12:31:27.188042 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1408e31317271f7c4b0861d8851504bc869e09e2bc8dc70eef3f279d854c7d21"} err="failed to get container status \"1408e31317271f7c4b0861d8851504bc869e09e2bc8dc70eef3f279d854c7d21\": rpc error: code = NotFound desc = could not find container \"1408e31317271f7c4b0861d8851504bc869e09e2bc8dc70eef3f279d854c7d21\": container with ID starting with 1408e31317271f7c4b0861d8851504bc869e09e2bc8dc70eef3f279d854c7d21 not found: ID does not exist" Jan 29 12:31:27 crc kubenswrapper[4660]: I0129 12:31:27.188146 4660 scope.go:117] "RemoveContainer" containerID="169cea05d6b78173fc9fe0b594c9a6489fcbd6d40806be546a1e863f906300b4" Jan 29 12:31:27 crc kubenswrapper[4660]: E0129 12:31:27.189391 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"169cea05d6b78173fc9fe0b594c9a6489fcbd6d40806be546a1e863f906300b4\": container with ID starting with 169cea05d6b78173fc9fe0b594c9a6489fcbd6d40806be546a1e863f906300b4 not found: ID does not exist" 
containerID="169cea05d6b78173fc9fe0b594c9a6489fcbd6d40806be546a1e863f906300b4" Jan 29 12:31:27 crc kubenswrapper[4660]: I0129 12:31:27.189444 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"169cea05d6b78173fc9fe0b594c9a6489fcbd6d40806be546a1e863f906300b4"} err="failed to get container status \"169cea05d6b78173fc9fe0b594c9a6489fcbd6d40806be546a1e863f906300b4\": rpc error: code = NotFound desc = could not find container \"169cea05d6b78173fc9fe0b594c9a6489fcbd6d40806be546a1e863f906300b4\": container with ID starting with 169cea05d6b78173fc9fe0b594c9a6489fcbd6d40806be546a1e863f906300b4 not found: ID does not exist" Jan 29 12:31:28 crc kubenswrapper[4660]: I0129 12:31:28.058905 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf969a82-6f4e-405c-8ea1-fce38a37e191-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf969a82-6f4e-405c-8ea1-fce38a37e191" (UID: "cf969a82-6f4e-405c-8ea1-fce38a37e191"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:31:28 crc kubenswrapper[4660]: I0129 12:31:28.081841 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf969a82-6f4e-405c-8ea1-fce38a37e191-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:31:28 crc kubenswrapper[4660]: I0129 12:31:28.343809 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z7xxg"] Jan 29 12:31:28 crc kubenswrapper[4660]: I0129 12:31:28.349904 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z7xxg"] Jan 29 12:31:29 crc kubenswrapper[4660]: I0129 12:31:29.478657 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf969a82-6f4e-405c-8ea1-fce38a37e191" path="/var/lib/kubelet/pods/cf969a82-6f4e-405c-8ea1-fce38a37e191/volumes" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.008883 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ts8d7"] Jan 29 12:31:51 crc kubenswrapper[4660]: E0129 12:31:51.009819 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf969a82-6f4e-405c-8ea1-fce38a37e191" containerName="extract-utilities" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.009837 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf969a82-6f4e-405c-8ea1-fce38a37e191" containerName="extract-utilities" Jan 29 12:31:51 crc kubenswrapper[4660]: E0129 12:31:51.009853 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf969a82-6f4e-405c-8ea1-fce38a37e191" containerName="extract-content" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.009860 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf969a82-6f4e-405c-8ea1-fce38a37e191" containerName="extract-content" Jan 29 12:31:51 crc kubenswrapper[4660]: E0129 12:31:51.009876 4660 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="cf969a82-6f4e-405c-8ea1-fce38a37e191" containerName="registry-server" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.009885 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf969a82-6f4e-405c-8ea1-fce38a37e191" containerName="registry-server" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.010105 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf969a82-6f4e-405c-8ea1-fce38a37e191" containerName="registry-server" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.011304 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.031920 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ts8d7"] Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.103411 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk7hm\" (UniqueName: \"kubernetes.io/projected/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-kube-api-access-dk7hm\") pod \"redhat-operators-ts8d7\" (UID: \"5ba46a0a-3db1-4b07-82ee-5a352ea166c0\") " pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.103502 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-catalog-content\") pod \"redhat-operators-ts8d7\" (UID: \"5ba46a0a-3db1-4b07-82ee-5a352ea166c0\") " pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.103551 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-utilities\") pod \"redhat-operators-ts8d7\" (UID: 
\"5ba46a0a-3db1-4b07-82ee-5a352ea166c0\") " pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.205218 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-utilities\") pod \"redhat-operators-ts8d7\" (UID: \"5ba46a0a-3db1-4b07-82ee-5a352ea166c0\") " pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.205590 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk7hm\" (UniqueName: \"kubernetes.io/projected/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-kube-api-access-dk7hm\") pod \"redhat-operators-ts8d7\" (UID: \"5ba46a0a-3db1-4b07-82ee-5a352ea166c0\") " pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.205862 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-utilities\") pod \"redhat-operators-ts8d7\" (UID: \"5ba46a0a-3db1-4b07-82ee-5a352ea166c0\") " pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.206071 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-catalog-content\") pod \"redhat-operators-ts8d7\" (UID: \"5ba46a0a-3db1-4b07-82ee-5a352ea166c0\") " pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.206130 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-catalog-content\") pod \"redhat-operators-ts8d7\" (UID: \"5ba46a0a-3db1-4b07-82ee-5a352ea166c0\") " 
pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.228878 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk7hm\" (UniqueName: \"kubernetes.io/projected/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-kube-api-access-dk7hm\") pod \"redhat-operators-ts8d7\" (UID: \"5ba46a0a-3db1-4b07-82ee-5a352ea166c0\") " pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.380288 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:31:51 crc kubenswrapper[4660]: I0129 12:31:51.847287 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ts8d7"] Jan 29 12:31:52 crc kubenswrapper[4660]: I0129 12:31:52.341094 4660 generic.go:334] "Generic (PLEG): container finished" podID="5ba46a0a-3db1-4b07-82ee-5a352ea166c0" containerID="fd17737b88fd3a477eaabdf8fee46f53db6adb5a52576a5acab2452751c43173" exitCode=0 Jan 29 12:31:52 crc kubenswrapper[4660]: I0129 12:31:52.341172 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ts8d7" event={"ID":"5ba46a0a-3db1-4b07-82ee-5a352ea166c0","Type":"ContainerDied","Data":"fd17737b88fd3a477eaabdf8fee46f53db6adb5a52576a5acab2452751c43173"} Jan 29 12:31:52 crc kubenswrapper[4660]: I0129 12:31:52.341244 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ts8d7" event={"ID":"5ba46a0a-3db1-4b07-82ee-5a352ea166c0","Type":"ContainerStarted","Data":"2f7eb019f2eb89ab864878e9d0329eb1993b9bc57bb49f3b18775ee3f20946dc"} Jan 29 12:31:53 crc kubenswrapper[4660]: I0129 12:31:53.348347 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ts8d7" 
event={"ID":"5ba46a0a-3db1-4b07-82ee-5a352ea166c0","Type":"ContainerStarted","Data":"c64be71758f545ca357782e5bdf9b221e96ed51933eaf6bfc758eecd3fa7f6c4"} Jan 29 12:31:54 crc kubenswrapper[4660]: I0129 12:31:54.358194 4660 generic.go:334] "Generic (PLEG): container finished" podID="5ba46a0a-3db1-4b07-82ee-5a352ea166c0" containerID="c64be71758f545ca357782e5bdf9b221e96ed51933eaf6bfc758eecd3fa7f6c4" exitCode=0 Jan 29 12:31:54 crc kubenswrapper[4660]: I0129 12:31:54.359035 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ts8d7" event={"ID":"5ba46a0a-3db1-4b07-82ee-5a352ea166c0","Type":"ContainerDied","Data":"c64be71758f545ca357782e5bdf9b221e96ed51933eaf6bfc758eecd3fa7f6c4"} Jan 29 12:31:55 crc kubenswrapper[4660]: I0129 12:31:55.369655 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ts8d7" event={"ID":"5ba46a0a-3db1-4b07-82ee-5a352ea166c0","Type":"ContainerStarted","Data":"8378ac7e08a4e07581123fd25eb5b4f05863f98ffaab79f3f16b5cefc4f234ca"} Jan 29 12:31:55 crc kubenswrapper[4660]: I0129 12:31:55.396871 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ts8d7" podStartSLOduration=2.9732419869999998 podStartE2EDuration="5.396856933s" podCreationTimestamp="2026-01-29 12:31:50 +0000 UTC" firstStartedPulling="2026-01-29 12:31:52.34305064 +0000 UTC m=+1549.565992772" lastFinishedPulling="2026-01-29 12:31:54.766665586 +0000 UTC m=+1551.989607718" observedRunningTime="2026-01-29 12:31:55.395220646 +0000 UTC m=+1552.618162798" watchObservedRunningTime="2026-01-29 12:31:55.396856933 +0000 UTC m=+1552.619799065" Jan 29 12:31:56 crc kubenswrapper[4660]: I0129 12:31:56.269136 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 29 12:31:56 crc kubenswrapper[4660]: I0129 12:31:56.269198 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:31:56 crc kubenswrapper[4660]: I0129 12:31:56.269240 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:31:56 crc kubenswrapper[4660]: I0129 12:31:56.269818 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c"} pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:31:56 crc kubenswrapper[4660]: I0129 12:31:56.269874 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" containerID="cri-o://8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" gracePeriod=600 Jan 29 12:31:56 crc kubenswrapper[4660]: E0129 12:31:56.401569 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:31:57 crc 
kubenswrapper[4660]: I0129 12:31:57.385220 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerDied","Data":"8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c"} Jan 29 12:31:57 crc kubenswrapper[4660]: I0129 12:31:57.385222 4660 generic.go:334] "Generic (PLEG): container finished" podID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" exitCode=0 Jan 29 12:31:57 crc kubenswrapper[4660]: I0129 12:31:57.385272 4660 scope.go:117] "RemoveContainer" containerID="d9ed707aaa9f2952cd7088a7a5f900080b30dba83908f0000c4e67ebbd66ab7a" Jan 29 12:31:57 crc kubenswrapper[4660]: I0129 12:31:57.387888 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:31:57 crc kubenswrapper[4660]: E0129 12:31:57.388399 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:32:00 crc kubenswrapper[4660]: I0129 12:32:00.590539 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zlgzh"] Jan 29 12:32:00 crc kubenswrapper[4660]: I0129 12:32:00.593461 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:00 crc kubenswrapper[4660]: I0129 12:32:00.602797 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zlgzh"] Jan 29 12:32:00 crc kubenswrapper[4660]: I0129 12:32:00.661553 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40242826-0ea5-48c4-b873-c5333f5c89b2-utilities\") pod \"redhat-marketplace-zlgzh\" (UID: \"40242826-0ea5-48c4-b873-c5333f5c89b2\") " pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:00 crc kubenswrapper[4660]: I0129 12:32:00.661855 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p669\" (UniqueName: \"kubernetes.io/projected/40242826-0ea5-48c4-b873-c5333f5c89b2-kube-api-access-4p669\") pod \"redhat-marketplace-zlgzh\" (UID: \"40242826-0ea5-48c4-b873-c5333f5c89b2\") " pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:00 crc kubenswrapper[4660]: I0129 12:32:00.662123 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40242826-0ea5-48c4-b873-c5333f5c89b2-catalog-content\") pod \"redhat-marketplace-zlgzh\" (UID: \"40242826-0ea5-48c4-b873-c5333f5c89b2\") " pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:00 crc kubenswrapper[4660]: I0129 12:32:00.763179 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40242826-0ea5-48c4-b873-c5333f5c89b2-utilities\") pod \"redhat-marketplace-zlgzh\" (UID: \"40242826-0ea5-48c4-b873-c5333f5c89b2\") " pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:00 crc kubenswrapper[4660]: I0129 12:32:00.763276 4660 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4p669\" (UniqueName: \"kubernetes.io/projected/40242826-0ea5-48c4-b873-c5333f5c89b2-kube-api-access-4p669\") pod \"redhat-marketplace-zlgzh\" (UID: \"40242826-0ea5-48c4-b873-c5333f5c89b2\") " pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:00 crc kubenswrapper[4660]: I0129 12:32:00.763364 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40242826-0ea5-48c4-b873-c5333f5c89b2-catalog-content\") pod \"redhat-marketplace-zlgzh\" (UID: \"40242826-0ea5-48c4-b873-c5333f5c89b2\") " pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:00 crc kubenswrapper[4660]: I0129 12:32:00.763719 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40242826-0ea5-48c4-b873-c5333f5c89b2-utilities\") pod \"redhat-marketplace-zlgzh\" (UID: \"40242826-0ea5-48c4-b873-c5333f5c89b2\") " pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:00 crc kubenswrapper[4660]: I0129 12:32:00.763825 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40242826-0ea5-48c4-b873-c5333f5c89b2-catalog-content\") pod \"redhat-marketplace-zlgzh\" (UID: \"40242826-0ea5-48c4-b873-c5333f5c89b2\") " pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:00 crc kubenswrapper[4660]: I0129 12:32:00.789962 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p669\" (UniqueName: \"kubernetes.io/projected/40242826-0ea5-48c4-b873-c5333f5c89b2-kube-api-access-4p669\") pod \"redhat-marketplace-zlgzh\" (UID: \"40242826-0ea5-48c4-b873-c5333f5c89b2\") " pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:00 crc kubenswrapper[4660]: I0129 12:32:00.915954 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:01 crc kubenswrapper[4660]: I0129 12:32:01.383287 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:32:01 crc kubenswrapper[4660]: I0129 12:32:01.383557 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:32:01 crc kubenswrapper[4660]: I0129 12:32:01.431478 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:32:01 crc kubenswrapper[4660]: I0129 12:32:01.431786 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zlgzh"] Jan 29 12:32:01 crc kubenswrapper[4660]: W0129 12:32:01.435491 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40242826_0ea5_48c4_b873_c5333f5c89b2.slice/crio-ccc8dadc1e3852807f3d13054a19cc0eacd8fed8c14ed024f6d3e5cffe37348b WatchSource:0}: Error finding container ccc8dadc1e3852807f3d13054a19cc0eacd8fed8c14ed024f6d3e5cffe37348b: Status 404 returned error can't find the container with id ccc8dadc1e3852807f3d13054a19cc0eacd8fed8c14ed024f6d3e5cffe37348b Jan 29 12:32:01 crc kubenswrapper[4660]: I0129 12:32:01.499354 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:32:02 crc kubenswrapper[4660]: I0129 12:32:02.417780 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zlgzh" event={"ID":"40242826-0ea5-48c4-b873-c5333f5c89b2","Type":"ContainerStarted","Data":"ccc8dadc1e3852807f3d13054a19cc0eacd8fed8c14ed024f6d3e5cffe37348b"} Jan 29 12:32:03 crc kubenswrapper[4660]: I0129 12:32:03.426683 4660 generic.go:334] "Generic (PLEG): container finished" 
podID="40242826-0ea5-48c4-b873-c5333f5c89b2" containerID="fbecb70967bb962e0dd5fa069a7425d9a5f2ac8c45ef798f8e1a2f655905ebcf" exitCode=0 Jan 29 12:32:03 crc kubenswrapper[4660]: I0129 12:32:03.426918 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zlgzh" event={"ID":"40242826-0ea5-48c4-b873-c5333f5c89b2","Type":"ContainerDied","Data":"fbecb70967bb962e0dd5fa069a7425d9a5f2ac8c45ef798f8e1a2f655905ebcf"} Jan 29 12:32:03 crc kubenswrapper[4660]: I0129 12:32:03.766479 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ts8d7"] Jan 29 12:32:03 crc kubenswrapper[4660]: I0129 12:32:03.766731 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ts8d7" podUID="5ba46a0a-3db1-4b07-82ee-5a352ea166c0" containerName="registry-server" containerID="cri-o://8378ac7e08a4e07581123fd25eb5b4f05863f98ffaab79f3f16b5cefc4f234ca" gracePeriod=2 Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.219962 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.308885 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk7hm\" (UniqueName: \"kubernetes.io/projected/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-kube-api-access-dk7hm\") pod \"5ba46a0a-3db1-4b07-82ee-5a352ea166c0\" (UID: \"5ba46a0a-3db1-4b07-82ee-5a352ea166c0\") " Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.309211 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-catalog-content\") pod \"5ba46a0a-3db1-4b07-82ee-5a352ea166c0\" (UID: \"5ba46a0a-3db1-4b07-82ee-5a352ea166c0\") " Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.309391 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-utilities\") pod \"5ba46a0a-3db1-4b07-82ee-5a352ea166c0\" (UID: \"5ba46a0a-3db1-4b07-82ee-5a352ea166c0\") " Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.310440 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-utilities" (OuterVolumeSpecName: "utilities") pod "5ba46a0a-3db1-4b07-82ee-5a352ea166c0" (UID: "5ba46a0a-3db1-4b07-82ee-5a352ea166c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.330186 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-kube-api-access-dk7hm" (OuterVolumeSpecName: "kube-api-access-dk7hm") pod "5ba46a0a-3db1-4b07-82ee-5a352ea166c0" (UID: "5ba46a0a-3db1-4b07-82ee-5a352ea166c0"). InnerVolumeSpecName "kube-api-access-dk7hm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.410565 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.410817 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk7hm\" (UniqueName: \"kubernetes.io/projected/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-kube-api-access-dk7hm\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.440310 4660 generic.go:334] "Generic (PLEG): container finished" podID="5ba46a0a-3db1-4b07-82ee-5a352ea166c0" containerID="8378ac7e08a4e07581123fd25eb5b4f05863f98ffaab79f3f16b5cefc4f234ca" exitCode=0 Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.440381 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ts8d7" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.440394 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ts8d7" event={"ID":"5ba46a0a-3db1-4b07-82ee-5a352ea166c0","Type":"ContainerDied","Data":"8378ac7e08a4e07581123fd25eb5b4f05863f98ffaab79f3f16b5cefc4f234ca"} Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.440424 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ts8d7" event={"ID":"5ba46a0a-3db1-4b07-82ee-5a352ea166c0","Type":"ContainerDied","Data":"2f7eb019f2eb89ab864878e9d0329eb1993b9bc57bb49f3b18775ee3f20946dc"} Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.440441 4660 scope.go:117] "RemoveContainer" containerID="8378ac7e08a4e07581123fd25eb5b4f05863f98ffaab79f3f16b5cefc4f234ca" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.443605 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ba46a0a-3db1-4b07-82ee-5a352ea166c0" (UID: "5ba46a0a-3db1-4b07-82ee-5a352ea166c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.445389 4660 generic.go:334] "Generic (PLEG): container finished" podID="40242826-0ea5-48c4-b873-c5333f5c89b2" containerID="455c3b862a53ae5ba03ca729b57e14e93a0b922d13517c56056304206a708b8d" exitCode=0 Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.445459 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zlgzh" event={"ID":"40242826-0ea5-48c4-b873-c5333f5c89b2","Type":"ContainerDied","Data":"455c3b862a53ae5ba03ca729b57e14e93a0b922d13517c56056304206a708b8d"} Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.476406 4660 scope.go:117] "RemoveContainer" containerID="c64be71758f545ca357782e5bdf9b221e96ed51933eaf6bfc758eecd3fa7f6c4" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.502477 4660 scope.go:117] "RemoveContainer" containerID="fd17737b88fd3a477eaabdf8fee46f53db6adb5a52576a5acab2452751c43173" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.511768 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ba46a0a-3db1-4b07-82ee-5a352ea166c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.519809 4660 scope.go:117] "RemoveContainer" containerID="8378ac7e08a4e07581123fd25eb5b4f05863f98ffaab79f3f16b5cefc4f234ca" Jan 29 12:32:04 crc kubenswrapper[4660]: E0129 12:32:04.520322 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8378ac7e08a4e07581123fd25eb5b4f05863f98ffaab79f3f16b5cefc4f234ca\": container with ID starting with 
8378ac7e08a4e07581123fd25eb5b4f05863f98ffaab79f3f16b5cefc4f234ca not found: ID does not exist" containerID="8378ac7e08a4e07581123fd25eb5b4f05863f98ffaab79f3f16b5cefc4f234ca" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.520348 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8378ac7e08a4e07581123fd25eb5b4f05863f98ffaab79f3f16b5cefc4f234ca"} err="failed to get container status \"8378ac7e08a4e07581123fd25eb5b4f05863f98ffaab79f3f16b5cefc4f234ca\": rpc error: code = NotFound desc = could not find container \"8378ac7e08a4e07581123fd25eb5b4f05863f98ffaab79f3f16b5cefc4f234ca\": container with ID starting with 8378ac7e08a4e07581123fd25eb5b4f05863f98ffaab79f3f16b5cefc4f234ca not found: ID does not exist" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.520368 4660 scope.go:117] "RemoveContainer" containerID="c64be71758f545ca357782e5bdf9b221e96ed51933eaf6bfc758eecd3fa7f6c4" Jan 29 12:32:04 crc kubenswrapper[4660]: E0129 12:32:04.520625 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c64be71758f545ca357782e5bdf9b221e96ed51933eaf6bfc758eecd3fa7f6c4\": container with ID starting with c64be71758f545ca357782e5bdf9b221e96ed51933eaf6bfc758eecd3fa7f6c4 not found: ID does not exist" containerID="c64be71758f545ca357782e5bdf9b221e96ed51933eaf6bfc758eecd3fa7f6c4" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.520641 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c64be71758f545ca357782e5bdf9b221e96ed51933eaf6bfc758eecd3fa7f6c4"} err="failed to get container status \"c64be71758f545ca357782e5bdf9b221e96ed51933eaf6bfc758eecd3fa7f6c4\": rpc error: code = NotFound desc = could not find container \"c64be71758f545ca357782e5bdf9b221e96ed51933eaf6bfc758eecd3fa7f6c4\": container with ID starting with c64be71758f545ca357782e5bdf9b221e96ed51933eaf6bfc758eecd3fa7f6c4 not found: ID does not 
exist" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.520656 4660 scope.go:117] "RemoveContainer" containerID="fd17737b88fd3a477eaabdf8fee46f53db6adb5a52576a5acab2452751c43173" Jan 29 12:32:04 crc kubenswrapper[4660]: E0129 12:32:04.520870 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd17737b88fd3a477eaabdf8fee46f53db6adb5a52576a5acab2452751c43173\": container with ID starting with fd17737b88fd3a477eaabdf8fee46f53db6adb5a52576a5acab2452751c43173 not found: ID does not exist" containerID="fd17737b88fd3a477eaabdf8fee46f53db6adb5a52576a5acab2452751c43173" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.520894 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd17737b88fd3a477eaabdf8fee46f53db6adb5a52576a5acab2452751c43173"} err="failed to get container status \"fd17737b88fd3a477eaabdf8fee46f53db6adb5a52576a5acab2452751c43173\": rpc error: code = NotFound desc = could not find container \"fd17737b88fd3a477eaabdf8fee46f53db6adb5a52576a5acab2452751c43173\": container with ID starting with fd17737b88fd3a477eaabdf8fee46f53db6adb5a52576a5acab2452751c43173 not found: ID does not exist" Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.789779 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ts8d7"] Jan 29 12:32:04 crc kubenswrapper[4660]: I0129 12:32:04.797023 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ts8d7"] Jan 29 12:32:05 crc kubenswrapper[4660]: I0129 12:32:05.452773 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zlgzh" event={"ID":"40242826-0ea5-48c4-b873-c5333f5c89b2","Type":"ContainerStarted","Data":"e01a474b2acb02da1b0aae28a0c5fa2aedc28894b5376ffee54c942835b350e8"} Jan 29 12:32:05 crc kubenswrapper[4660]: I0129 12:32:05.469907 4660 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zlgzh" podStartSLOduration=4.038260269 podStartE2EDuration="5.469888078s" podCreationTimestamp="2026-01-29 12:32:00 +0000 UTC" firstStartedPulling="2026-01-29 12:32:03.428120429 +0000 UTC m=+1560.651062601" lastFinishedPulling="2026-01-29 12:32:04.859748288 +0000 UTC m=+1562.082690410" observedRunningTime="2026-01-29 12:32:05.467345905 +0000 UTC m=+1562.690288037" watchObservedRunningTime="2026-01-29 12:32:05.469888078 +0000 UTC m=+1562.692830210" Jan 29 12:32:05 crc kubenswrapper[4660]: I0129 12:32:05.479074 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ba46a0a-3db1-4b07-82ee-5a352ea166c0" path="/var/lib/kubelet/pods/5ba46a0a-3db1-4b07-82ee-5a352ea166c0/volumes" Jan 29 12:32:10 crc kubenswrapper[4660]: I0129 12:32:10.916969 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:10 crc kubenswrapper[4660]: I0129 12:32:10.917022 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:10 crc kubenswrapper[4660]: I0129 12:32:10.957272 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:11 crc kubenswrapper[4660]: I0129 12:32:11.470153 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:32:11 crc kubenswrapper[4660]: E0129 12:32:11.470394 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:32:11 crc kubenswrapper[4660]: I0129 12:32:11.532670 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:11 crc kubenswrapper[4660]: I0129 12:32:11.579168 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zlgzh"] Jan 29 12:32:13 crc kubenswrapper[4660]: I0129 12:32:13.504029 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zlgzh" podUID="40242826-0ea5-48c4-b873-c5333f5c89b2" containerName="registry-server" containerID="cri-o://e01a474b2acb02da1b0aae28a0c5fa2aedc28894b5376ffee54c942835b350e8" gracePeriod=2 Jan 29 12:32:13 crc kubenswrapper[4660]: I0129 12:32:13.901228 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.052754 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40242826-0ea5-48c4-b873-c5333f5c89b2-catalog-content\") pod \"40242826-0ea5-48c4-b873-c5333f5c89b2\" (UID: \"40242826-0ea5-48c4-b873-c5333f5c89b2\") " Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.052806 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p669\" (UniqueName: \"kubernetes.io/projected/40242826-0ea5-48c4-b873-c5333f5c89b2-kube-api-access-4p669\") pod \"40242826-0ea5-48c4-b873-c5333f5c89b2\" (UID: \"40242826-0ea5-48c4-b873-c5333f5c89b2\") " Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.052883 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/40242826-0ea5-48c4-b873-c5333f5c89b2-utilities\") pod \"40242826-0ea5-48c4-b873-c5333f5c89b2\" (UID: \"40242826-0ea5-48c4-b873-c5333f5c89b2\") " Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.054024 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40242826-0ea5-48c4-b873-c5333f5c89b2-utilities" (OuterVolumeSpecName: "utilities") pod "40242826-0ea5-48c4-b873-c5333f5c89b2" (UID: "40242826-0ea5-48c4-b873-c5333f5c89b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.061886 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40242826-0ea5-48c4-b873-c5333f5c89b2-kube-api-access-4p669" (OuterVolumeSpecName: "kube-api-access-4p669") pod "40242826-0ea5-48c4-b873-c5333f5c89b2" (UID: "40242826-0ea5-48c4-b873-c5333f5c89b2"). InnerVolumeSpecName "kube-api-access-4p669". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.078396 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40242826-0ea5-48c4-b873-c5333f5c89b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40242826-0ea5-48c4-b873-c5333f5c89b2" (UID: "40242826-0ea5-48c4-b873-c5333f5c89b2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.154718 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40242826-0ea5-48c4-b873-c5333f5c89b2-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.154747 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40242826-0ea5-48c4-b873-c5333f5c89b2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.154771 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p669\" (UniqueName: \"kubernetes.io/projected/40242826-0ea5-48c4-b873-c5333f5c89b2-kube-api-access-4p669\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.511071 4660 generic.go:334] "Generic (PLEG): container finished" podID="40242826-0ea5-48c4-b873-c5333f5c89b2" containerID="e01a474b2acb02da1b0aae28a0c5fa2aedc28894b5376ffee54c942835b350e8" exitCode=0 Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.511119 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zlgzh" event={"ID":"40242826-0ea5-48c4-b873-c5333f5c89b2","Type":"ContainerDied","Data":"e01a474b2acb02da1b0aae28a0c5fa2aedc28894b5376ffee54c942835b350e8"} Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.511126 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zlgzh" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.511148 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zlgzh" event={"ID":"40242826-0ea5-48c4-b873-c5333f5c89b2","Type":"ContainerDied","Data":"ccc8dadc1e3852807f3d13054a19cc0eacd8fed8c14ed024f6d3e5cffe37348b"} Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.511171 4660 scope.go:117] "RemoveContainer" containerID="e01a474b2acb02da1b0aae28a0c5fa2aedc28894b5376ffee54c942835b350e8" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.544663 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zlgzh"] Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.551349 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zlgzh"] Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.556382 4660 scope.go:117] "RemoveContainer" containerID="455c3b862a53ae5ba03ca729b57e14e93a0b922d13517c56056304206a708b8d" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.575306 4660 scope.go:117] "RemoveContainer" containerID="fbecb70967bb962e0dd5fa069a7425d9a5f2ac8c45ef798f8e1a2f655905ebcf" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.600937 4660 scope.go:117] "RemoveContainer" containerID="e01a474b2acb02da1b0aae28a0c5fa2aedc28894b5376ffee54c942835b350e8" Jan 29 12:32:14 crc kubenswrapper[4660]: E0129 12:32:14.601461 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e01a474b2acb02da1b0aae28a0c5fa2aedc28894b5376ffee54c942835b350e8\": container with ID starting with e01a474b2acb02da1b0aae28a0c5fa2aedc28894b5376ffee54c942835b350e8 not found: ID does not exist" containerID="e01a474b2acb02da1b0aae28a0c5fa2aedc28894b5376ffee54c942835b350e8" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.601558 4660 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e01a474b2acb02da1b0aae28a0c5fa2aedc28894b5376ffee54c942835b350e8"} err="failed to get container status \"e01a474b2acb02da1b0aae28a0c5fa2aedc28894b5376ffee54c942835b350e8\": rpc error: code = NotFound desc = could not find container \"e01a474b2acb02da1b0aae28a0c5fa2aedc28894b5376ffee54c942835b350e8\": container with ID starting with e01a474b2acb02da1b0aae28a0c5fa2aedc28894b5376ffee54c942835b350e8 not found: ID does not exist" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.601603 4660 scope.go:117] "RemoveContainer" containerID="455c3b862a53ae5ba03ca729b57e14e93a0b922d13517c56056304206a708b8d" Jan 29 12:32:14 crc kubenswrapper[4660]: E0129 12:32:14.602126 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"455c3b862a53ae5ba03ca729b57e14e93a0b922d13517c56056304206a708b8d\": container with ID starting with 455c3b862a53ae5ba03ca729b57e14e93a0b922d13517c56056304206a708b8d not found: ID does not exist" containerID="455c3b862a53ae5ba03ca729b57e14e93a0b922d13517c56056304206a708b8d" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.602449 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"455c3b862a53ae5ba03ca729b57e14e93a0b922d13517c56056304206a708b8d"} err="failed to get container status \"455c3b862a53ae5ba03ca729b57e14e93a0b922d13517c56056304206a708b8d\": rpc error: code = NotFound desc = could not find container \"455c3b862a53ae5ba03ca729b57e14e93a0b922d13517c56056304206a708b8d\": container with ID starting with 455c3b862a53ae5ba03ca729b57e14e93a0b922d13517c56056304206a708b8d not found: ID does not exist" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.602621 4660 scope.go:117] "RemoveContainer" containerID="fbecb70967bb962e0dd5fa069a7425d9a5f2ac8c45ef798f8e1a2f655905ebcf" Jan 29 12:32:14 crc kubenswrapper[4660]: E0129 
12:32:14.604252 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbecb70967bb962e0dd5fa069a7425d9a5f2ac8c45ef798f8e1a2f655905ebcf\": container with ID starting with fbecb70967bb962e0dd5fa069a7425d9a5f2ac8c45ef798f8e1a2f655905ebcf not found: ID does not exist" containerID="fbecb70967bb962e0dd5fa069a7425d9a5f2ac8c45ef798f8e1a2f655905ebcf" Jan 29 12:32:14 crc kubenswrapper[4660]: I0129 12:32:14.604289 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbecb70967bb962e0dd5fa069a7425d9a5f2ac8c45ef798f8e1a2f655905ebcf"} err="failed to get container status \"fbecb70967bb962e0dd5fa069a7425d9a5f2ac8c45ef798f8e1a2f655905ebcf\": rpc error: code = NotFound desc = could not find container \"fbecb70967bb962e0dd5fa069a7425d9a5f2ac8c45ef798f8e1a2f655905ebcf\": container with ID starting with fbecb70967bb962e0dd5fa069a7425d9a5f2ac8c45ef798f8e1a2f655905ebcf not found: ID does not exist" Jan 29 12:32:15 crc kubenswrapper[4660]: I0129 12:32:15.496625 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40242826-0ea5-48c4-b873-c5333f5c89b2" path="/var/lib/kubelet/pods/40242826-0ea5-48c4-b873-c5333f5c89b2/volumes" Jan 29 12:32:26 crc kubenswrapper[4660]: I0129 12:32:26.469381 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:32:26 crc kubenswrapper[4660]: E0129 12:32:26.470001 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:32:39 crc kubenswrapper[4660]: I0129 12:32:39.469442 
4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:32:39 crc kubenswrapper[4660]: E0129 12:32:39.470383 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:32:53 crc kubenswrapper[4660]: I0129 12:32:53.477144 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:32:53 crc kubenswrapper[4660]: E0129 12:32:53.480562 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:33:04 crc kubenswrapper[4660]: I0129 12:33:04.469461 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:33:04 crc kubenswrapper[4660]: E0129 12:33:04.470226 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:33:18 crc kubenswrapper[4660]: I0129 
12:33:18.469367 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:33:18 crc kubenswrapper[4660]: E0129 12:33:18.470225 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:33:32 crc kubenswrapper[4660]: I0129 12:33:32.470824 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:33:32 crc kubenswrapper[4660]: E0129 12:33:32.471595 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:33:43 crc kubenswrapper[4660]: I0129 12:33:43.473387 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:33:43 crc kubenswrapper[4660]: E0129 12:33:43.474109 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:33:57 crc 
kubenswrapper[4660]: I0129 12:33:57.470310 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:33:57 crc kubenswrapper[4660]: E0129 12:33:57.471997 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:34:08 crc kubenswrapper[4660]: I0129 12:34:08.469539 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:34:08 crc kubenswrapper[4660]: E0129 12:34:08.470429 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:34:19 crc kubenswrapper[4660]: I0129 12:34:19.469641 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:34:19 crc kubenswrapper[4660]: E0129 12:34:19.470556 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 
29 12:34:32 crc kubenswrapper[4660]: I0129 12:34:32.470503 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:34:32 crc kubenswrapper[4660]: E0129 12:34:32.471283 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:34:47 crc kubenswrapper[4660]: I0129 12:34:47.469828 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:34:47 crc kubenswrapper[4660]: E0129 12:34:47.470472 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:34:58 crc kubenswrapper[4660]: I0129 12:34:58.470445 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:34:58 crc kubenswrapper[4660]: E0129 12:34:58.471358 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:35:12 crc kubenswrapper[4660]: I0129 12:35:12.469131 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:35:12 crc kubenswrapper[4660]: E0129 12:35:12.469726 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:35:26 crc kubenswrapper[4660]: I0129 12:35:26.470306 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:35:26 crc kubenswrapper[4660]: E0129 12:35:26.470988 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:35:38 crc kubenswrapper[4660]: I0129 12:35:38.469572 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:35:38 crc kubenswrapper[4660]: E0129 12:35:38.470410 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:35:49 crc kubenswrapper[4660]: I0129 12:35:49.469667 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:35:49 crc kubenswrapper[4660]: E0129 12:35:49.470387 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:36:02 crc kubenswrapper[4660]: I0129 12:36:02.470532 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:36:02 crc kubenswrapper[4660]: E0129 12:36:02.471391 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:36:16 crc kubenswrapper[4660]: I0129 12:36:16.470147 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:36:16 crc kubenswrapper[4660]: E0129 12:36:16.470868 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:36:30 crc kubenswrapper[4660]: I0129 12:36:30.470279 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:36:30 crc kubenswrapper[4660]: E0129 12:36:30.471095 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.129143 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c4xw4"] Jan 29 12:36:38 crc kubenswrapper[4660]: E0129 12:36:38.129954 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40242826-0ea5-48c4-b873-c5333f5c89b2" containerName="extract-utilities" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.129967 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="40242826-0ea5-48c4-b873-c5333f5c89b2" containerName="extract-utilities" Jan 29 12:36:38 crc kubenswrapper[4660]: E0129 12:36:38.129976 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40242826-0ea5-48c4-b873-c5333f5c89b2" containerName="registry-server" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.129983 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="40242826-0ea5-48c4-b873-c5333f5c89b2" containerName="registry-server" Jan 29 12:36:38 crc kubenswrapper[4660]: E0129 12:36:38.129991 4660 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5ba46a0a-3db1-4b07-82ee-5a352ea166c0" containerName="extract-content" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.129997 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba46a0a-3db1-4b07-82ee-5a352ea166c0" containerName="extract-content" Jan 29 12:36:38 crc kubenswrapper[4660]: E0129 12:36:38.130013 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40242826-0ea5-48c4-b873-c5333f5c89b2" containerName="extract-content" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.130020 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="40242826-0ea5-48c4-b873-c5333f5c89b2" containerName="extract-content" Jan 29 12:36:38 crc kubenswrapper[4660]: E0129 12:36:38.130034 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba46a0a-3db1-4b07-82ee-5a352ea166c0" containerName="registry-server" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.130045 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba46a0a-3db1-4b07-82ee-5a352ea166c0" containerName="registry-server" Jan 29 12:36:38 crc kubenswrapper[4660]: E0129 12:36:38.130056 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba46a0a-3db1-4b07-82ee-5a352ea166c0" containerName="extract-utilities" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.130064 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba46a0a-3db1-4b07-82ee-5a352ea166c0" containerName="extract-utilities" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.130217 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="40242826-0ea5-48c4-b873-c5333f5c89b2" containerName="registry-server" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.130238 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ba46a0a-3db1-4b07-82ee-5a352ea166c0" containerName="registry-server" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.131115 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.182680 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4xw4"] Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.312804 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf21b219-1ad6-48f4-b50c-d77308147dd7-utilities\") pod \"certified-operators-c4xw4\" (UID: \"cf21b219-1ad6-48f4-b50c-d77308147dd7\") " pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.312952 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2znj\" (UniqueName: \"kubernetes.io/projected/cf21b219-1ad6-48f4-b50c-d77308147dd7-kube-api-access-p2znj\") pod \"certified-operators-c4xw4\" (UID: \"cf21b219-1ad6-48f4-b50c-d77308147dd7\") " pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.313109 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf21b219-1ad6-48f4-b50c-d77308147dd7-catalog-content\") pod \"certified-operators-c4xw4\" (UID: \"cf21b219-1ad6-48f4-b50c-d77308147dd7\") " pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.414424 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf21b219-1ad6-48f4-b50c-d77308147dd7-utilities\") pod \"certified-operators-c4xw4\" (UID: \"cf21b219-1ad6-48f4-b50c-d77308147dd7\") " pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.414489 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p2znj\" (UniqueName: \"kubernetes.io/projected/cf21b219-1ad6-48f4-b50c-d77308147dd7-kube-api-access-p2znj\") pod \"certified-operators-c4xw4\" (UID: \"cf21b219-1ad6-48f4-b50c-d77308147dd7\") " pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.414546 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf21b219-1ad6-48f4-b50c-d77308147dd7-catalog-content\") pod \"certified-operators-c4xw4\" (UID: \"cf21b219-1ad6-48f4-b50c-d77308147dd7\") " pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.415056 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf21b219-1ad6-48f4-b50c-d77308147dd7-utilities\") pod \"certified-operators-c4xw4\" (UID: \"cf21b219-1ad6-48f4-b50c-d77308147dd7\") " pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.415101 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf21b219-1ad6-48f4-b50c-d77308147dd7-catalog-content\") pod \"certified-operators-c4xw4\" (UID: \"cf21b219-1ad6-48f4-b50c-d77308147dd7\") " pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.448551 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2znj\" (UniqueName: \"kubernetes.io/projected/cf21b219-1ad6-48f4-b50c-d77308147dd7-kube-api-access-p2znj\") pod \"certified-operators-c4xw4\" (UID: \"cf21b219-1ad6-48f4-b50c-d77308147dd7\") " pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:38 crc kubenswrapper[4660]: I0129 12:36:38.454102 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:39 crc kubenswrapper[4660]: I0129 12:36:39.078870 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4xw4"] Jan 29 12:36:39 crc kubenswrapper[4660]: I0129 12:36:39.260860 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4xw4" event={"ID":"cf21b219-1ad6-48f4-b50c-d77308147dd7","Type":"ContainerStarted","Data":"81697fa941b03affa74b8e3d088e6257643ea17aa98c4bf9e32525c446dc705e"} Jan 29 12:36:40 crc kubenswrapper[4660]: I0129 12:36:40.269473 4660 generic.go:334] "Generic (PLEG): container finished" podID="cf21b219-1ad6-48f4-b50c-d77308147dd7" containerID="85364aa604d9c573443385ebb22500a679fad8a63ff87b1d13c125d619e2b17e" exitCode=0 Jan 29 12:36:40 crc kubenswrapper[4660]: I0129 12:36:40.269525 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4xw4" event={"ID":"cf21b219-1ad6-48f4-b50c-d77308147dd7","Type":"ContainerDied","Data":"85364aa604d9c573443385ebb22500a679fad8a63ff87b1d13c125d619e2b17e"} Jan 29 12:36:40 crc kubenswrapper[4660]: I0129 12:36:40.271468 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:36:41 crc kubenswrapper[4660]: I0129 12:36:41.276032 4660 generic.go:334] "Generic (PLEG): container finished" podID="cf21b219-1ad6-48f4-b50c-d77308147dd7" containerID="2a42f979e5febbd44b82332e1b6284f5c9a1a5fcf7c2bba0859a4dbf4a522153" exitCode=0 Jan 29 12:36:41 crc kubenswrapper[4660]: I0129 12:36:41.276288 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4xw4" event={"ID":"cf21b219-1ad6-48f4-b50c-d77308147dd7","Type":"ContainerDied","Data":"2a42f979e5febbd44b82332e1b6284f5c9a1a5fcf7c2bba0859a4dbf4a522153"} Jan 29 12:36:42 crc kubenswrapper[4660]: I0129 12:36:42.285473 4660 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-c4xw4" event={"ID":"cf21b219-1ad6-48f4-b50c-d77308147dd7","Type":"ContainerStarted","Data":"5cd3e8bcbfd419884860eb622ebbf45c36d7146a67b5019a835415917508b521"} Jan 29 12:36:42 crc kubenswrapper[4660]: I0129 12:36:42.313783 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c4xw4" podStartSLOduration=2.534512427 podStartE2EDuration="4.313757426s" podCreationTimestamp="2026-01-29 12:36:38 +0000 UTC" firstStartedPulling="2026-01-29 12:36:40.271084743 +0000 UTC m=+1837.494026905" lastFinishedPulling="2026-01-29 12:36:42.050329782 +0000 UTC m=+1839.273271904" observedRunningTime="2026-01-29 12:36:42.308107783 +0000 UTC m=+1839.531049965" watchObservedRunningTime="2026-01-29 12:36:42.313757426 +0000 UTC m=+1839.536699558" Jan 29 12:36:44 crc kubenswrapper[4660]: I0129 12:36:44.470235 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:36:44 crc kubenswrapper[4660]: E0129 12:36:44.470501 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:36:48 crc kubenswrapper[4660]: I0129 12:36:48.457944 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:48 crc kubenswrapper[4660]: I0129 12:36:48.458334 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:48 crc kubenswrapper[4660]: I0129 12:36:48.516186 4660 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:49 crc kubenswrapper[4660]: I0129 12:36:49.379352 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:49 crc kubenswrapper[4660]: I0129 12:36:49.425008 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4xw4"] Jan 29 12:36:51 crc kubenswrapper[4660]: I0129 12:36:51.345202 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c4xw4" podUID="cf21b219-1ad6-48f4-b50c-d77308147dd7" containerName="registry-server" containerID="cri-o://5cd3e8bcbfd419884860eb622ebbf45c36d7146a67b5019a835415917508b521" gracePeriod=2 Jan 29 12:36:51 crc kubenswrapper[4660]: I0129 12:36:51.708482 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:51 crc kubenswrapper[4660]: I0129 12:36:51.900370 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2znj\" (UniqueName: \"kubernetes.io/projected/cf21b219-1ad6-48f4-b50c-d77308147dd7-kube-api-access-p2znj\") pod \"cf21b219-1ad6-48f4-b50c-d77308147dd7\" (UID: \"cf21b219-1ad6-48f4-b50c-d77308147dd7\") " Jan 29 12:36:51 crc kubenswrapper[4660]: I0129 12:36:51.900436 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf21b219-1ad6-48f4-b50c-d77308147dd7-utilities\") pod \"cf21b219-1ad6-48f4-b50c-d77308147dd7\" (UID: \"cf21b219-1ad6-48f4-b50c-d77308147dd7\") " Jan 29 12:36:51 crc kubenswrapper[4660]: I0129 12:36:51.900460 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cf21b219-1ad6-48f4-b50c-d77308147dd7-catalog-content\") pod \"cf21b219-1ad6-48f4-b50c-d77308147dd7\" (UID: \"cf21b219-1ad6-48f4-b50c-d77308147dd7\") " Jan 29 12:36:51 crc kubenswrapper[4660]: I0129 12:36:51.901967 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf21b219-1ad6-48f4-b50c-d77308147dd7-utilities" (OuterVolumeSpecName: "utilities") pod "cf21b219-1ad6-48f4-b50c-d77308147dd7" (UID: "cf21b219-1ad6-48f4-b50c-d77308147dd7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:36:51 crc kubenswrapper[4660]: I0129 12:36:51.912621 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf21b219-1ad6-48f4-b50c-d77308147dd7-kube-api-access-p2znj" (OuterVolumeSpecName: "kube-api-access-p2znj") pod "cf21b219-1ad6-48f4-b50c-d77308147dd7" (UID: "cf21b219-1ad6-48f4-b50c-d77308147dd7"). InnerVolumeSpecName "kube-api-access-p2znj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.001923 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2znj\" (UniqueName: \"kubernetes.io/projected/cf21b219-1ad6-48f4-b50c-d77308147dd7-kube-api-access-p2znj\") on node \"crc\" DevicePath \"\"" Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.002251 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf21b219-1ad6-48f4-b50c-d77308147dd7-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.352153 4660 generic.go:334] "Generic (PLEG): container finished" podID="cf21b219-1ad6-48f4-b50c-d77308147dd7" containerID="5cd3e8bcbfd419884860eb622ebbf45c36d7146a67b5019a835415917508b521" exitCode=0 Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.352196 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4xw4" event={"ID":"cf21b219-1ad6-48f4-b50c-d77308147dd7","Type":"ContainerDied","Data":"5cd3e8bcbfd419884860eb622ebbf45c36d7146a67b5019a835415917508b521"} Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.352231 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4xw4" event={"ID":"cf21b219-1ad6-48f4-b50c-d77308147dd7","Type":"ContainerDied","Data":"81697fa941b03affa74b8e3d088e6257643ea17aa98c4bf9e32525c446dc705e"} Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.352249 4660 scope.go:117] "RemoveContainer" containerID="5cd3e8bcbfd419884860eb622ebbf45c36d7146a67b5019a835415917508b521" Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.352907 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c4xw4" Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.371432 4660 scope.go:117] "RemoveContainer" containerID="2a42f979e5febbd44b82332e1b6284f5c9a1a5fcf7c2bba0859a4dbf4a522153" Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.401165 4660 scope.go:117] "RemoveContainer" containerID="85364aa604d9c573443385ebb22500a679fad8a63ff87b1d13c125d619e2b17e" Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.415226 4660 scope.go:117] "RemoveContainer" containerID="5cd3e8bcbfd419884860eb622ebbf45c36d7146a67b5019a835415917508b521" Jan 29 12:36:52 crc kubenswrapper[4660]: E0129 12:36:52.416971 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cd3e8bcbfd419884860eb622ebbf45c36d7146a67b5019a835415917508b521\": container with ID starting with 5cd3e8bcbfd419884860eb622ebbf45c36d7146a67b5019a835415917508b521 not found: ID does not exist" containerID="5cd3e8bcbfd419884860eb622ebbf45c36d7146a67b5019a835415917508b521" Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.417002 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd3e8bcbfd419884860eb622ebbf45c36d7146a67b5019a835415917508b521"} err="failed to get container status \"5cd3e8bcbfd419884860eb622ebbf45c36d7146a67b5019a835415917508b521\": rpc error: code = NotFound desc = could not find container \"5cd3e8bcbfd419884860eb622ebbf45c36d7146a67b5019a835415917508b521\": container with ID starting with 5cd3e8bcbfd419884860eb622ebbf45c36d7146a67b5019a835415917508b521 not found: ID does not exist" Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.417023 4660 scope.go:117] "RemoveContainer" containerID="2a42f979e5febbd44b82332e1b6284f5c9a1a5fcf7c2bba0859a4dbf4a522153" Jan 29 12:36:52 crc kubenswrapper[4660]: E0129 12:36:52.417353 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"2a42f979e5febbd44b82332e1b6284f5c9a1a5fcf7c2bba0859a4dbf4a522153\": container with ID starting with 2a42f979e5febbd44b82332e1b6284f5c9a1a5fcf7c2bba0859a4dbf4a522153 not found: ID does not exist" containerID="2a42f979e5febbd44b82332e1b6284f5c9a1a5fcf7c2bba0859a4dbf4a522153" Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.417398 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a42f979e5febbd44b82332e1b6284f5c9a1a5fcf7c2bba0859a4dbf4a522153"} err="failed to get container status \"2a42f979e5febbd44b82332e1b6284f5c9a1a5fcf7c2bba0859a4dbf4a522153\": rpc error: code = NotFound desc = could not find container \"2a42f979e5febbd44b82332e1b6284f5c9a1a5fcf7c2bba0859a4dbf4a522153\": container with ID starting with 2a42f979e5febbd44b82332e1b6284f5c9a1a5fcf7c2bba0859a4dbf4a522153 not found: ID does not exist" Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.417424 4660 scope.go:117] "RemoveContainer" containerID="85364aa604d9c573443385ebb22500a679fad8a63ff87b1d13c125d619e2b17e" Jan 29 12:36:52 crc kubenswrapper[4660]: E0129 12:36:52.417800 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85364aa604d9c573443385ebb22500a679fad8a63ff87b1d13c125d619e2b17e\": container with ID starting with 85364aa604d9c573443385ebb22500a679fad8a63ff87b1d13c125d619e2b17e not found: ID does not exist" containerID="85364aa604d9c573443385ebb22500a679fad8a63ff87b1d13c125d619e2b17e" Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.417823 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85364aa604d9c573443385ebb22500a679fad8a63ff87b1d13c125d619e2b17e"} err="failed to get container status \"85364aa604d9c573443385ebb22500a679fad8a63ff87b1d13c125d619e2b17e\": rpc error: code = NotFound desc = could not find container 
\"85364aa604d9c573443385ebb22500a679fad8a63ff87b1d13c125d619e2b17e\": container with ID starting with 85364aa604d9c573443385ebb22500a679fad8a63ff87b1d13c125d619e2b17e not found: ID does not exist" Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.918010 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf21b219-1ad6-48f4-b50c-d77308147dd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf21b219-1ad6-48f4-b50c-d77308147dd7" (UID: "cf21b219-1ad6-48f4-b50c-d77308147dd7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.987484 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4xw4"] Jan 29 12:36:52 crc kubenswrapper[4660]: I0129 12:36:52.993573 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c4xw4"] Jan 29 12:36:53 crc kubenswrapper[4660]: I0129 12:36:53.015779 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf21b219-1ad6-48f4-b50c-d77308147dd7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:36:53 crc kubenswrapper[4660]: I0129 12:36:53.487605 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf21b219-1ad6-48f4-b50c-d77308147dd7" path="/var/lib/kubelet/pods/cf21b219-1ad6-48f4-b50c-d77308147dd7/volumes" Jan 29 12:36:56 crc kubenswrapper[4660]: I0129 12:36:56.469571 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:36:57 crc kubenswrapper[4660]: I0129 12:36:57.394306 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerStarted","Data":"c39de0bdc07a6f37674e61b9ccddab7ebcdf2567c3486a3bc9f5cf5b7dd95f83"} Jan 29 12:39:26 crc kubenswrapper[4660]: I0129 12:39:26.268960 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:39:26 crc kubenswrapper[4660]: I0129 12:39:26.269439 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:39:56 crc kubenswrapper[4660]: I0129 12:39:56.269561 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:39:56 crc kubenswrapper[4660]: I0129 12:39:56.270774 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:40:26 crc kubenswrapper[4660]: I0129 12:40:26.269395 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Jan 29 12:40:26 crc kubenswrapper[4660]: I0129 12:40:26.269969 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:40:26 crc kubenswrapper[4660]: I0129 12:40:26.270029 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:40:26 crc kubenswrapper[4660]: I0129 12:40:26.270781 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c39de0bdc07a6f37674e61b9ccddab7ebcdf2567c3486a3bc9f5cf5b7dd95f83"} pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:40:26 crc kubenswrapper[4660]: I0129 12:40:26.270856 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" containerID="cri-o://c39de0bdc07a6f37674e61b9ccddab7ebcdf2567c3486a3bc9f5cf5b7dd95f83" gracePeriod=600 Jan 29 12:40:26 crc kubenswrapper[4660]: I0129 12:40:26.846512 4660 generic.go:334] "Generic (PLEG): container finished" podID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerID="c39de0bdc07a6f37674e61b9ccddab7ebcdf2567c3486a3bc9f5cf5b7dd95f83" exitCode=0 Jan 29 12:40:26 crc kubenswrapper[4660]: I0129 12:40:26.846835 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerDied","Data":"c39de0bdc07a6f37674e61b9ccddab7ebcdf2567c3486a3bc9f5cf5b7dd95f83"} Jan 29 12:40:26 crc kubenswrapper[4660]: I0129 12:40:26.846862 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerStarted","Data":"88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae"} Jan 29 12:40:26 crc kubenswrapper[4660]: I0129 12:40:26.846877 4660 scope.go:117] "RemoveContainer" containerID="8bdf6f4b481123adba06cacbb20ebab4d4bc937ebe7d6ad208694ddf959e592c" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.434043 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-chtjg"] Jan 29 12:41:40 crc kubenswrapper[4660]: E0129 12:41:40.434955 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf21b219-1ad6-48f4-b50c-d77308147dd7" containerName="registry-server" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.434971 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf21b219-1ad6-48f4-b50c-d77308147dd7" containerName="registry-server" Jan 29 12:41:40 crc kubenswrapper[4660]: E0129 12:41:40.434986 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf21b219-1ad6-48f4-b50c-d77308147dd7" containerName="extract-content" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.434993 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf21b219-1ad6-48f4-b50c-d77308147dd7" containerName="extract-content" Jan 29 12:41:40 crc kubenswrapper[4660]: E0129 12:41:40.435018 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf21b219-1ad6-48f4-b50c-d77308147dd7" containerName="extract-utilities" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.435027 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf21b219-1ad6-48f4-b50c-d77308147dd7" 
containerName="extract-utilities" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.435183 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf21b219-1ad6-48f4-b50c-d77308147dd7" containerName="registry-server" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.436302 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.453276 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee124d06-3766-45c4-a525-7036ee47a227-utilities\") pod \"community-operators-chtjg\" (UID: \"ee124d06-3766-45c4-a525-7036ee47a227\") " pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.453586 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t7p8\" (UniqueName: \"kubernetes.io/projected/ee124d06-3766-45c4-a525-7036ee47a227-kube-api-access-6t7p8\") pod \"community-operators-chtjg\" (UID: \"ee124d06-3766-45c4-a525-7036ee47a227\") " pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.453612 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee124d06-3766-45c4-a525-7036ee47a227-catalog-content\") pod \"community-operators-chtjg\" (UID: \"ee124d06-3766-45c4-a525-7036ee47a227\") " pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.509405 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chtjg"] Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.558366 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6t7p8\" (UniqueName: \"kubernetes.io/projected/ee124d06-3766-45c4-a525-7036ee47a227-kube-api-access-6t7p8\") pod \"community-operators-chtjg\" (UID: \"ee124d06-3766-45c4-a525-7036ee47a227\") " pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.558410 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee124d06-3766-45c4-a525-7036ee47a227-catalog-content\") pod \"community-operators-chtjg\" (UID: \"ee124d06-3766-45c4-a525-7036ee47a227\") " pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.558481 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee124d06-3766-45c4-a525-7036ee47a227-utilities\") pod \"community-operators-chtjg\" (UID: \"ee124d06-3766-45c4-a525-7036ee47a227\") " pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.558915 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee124d06-3766-45c4-a525-7036ee47a227-utilities\") pod \"community-operators-chtjg\" (UID: \"ee124d06-3766-45c4-a525-7036ee47a227\") " pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.559905 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee124d06-3766-45c4-a525-7036ee47a227-catalog-content\") pod \"community-operators-chtjg\" (UID: \"ee124d06-3766-45c4-a525-7036ee47a227\") " pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.588018 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t7p8\" (UniqueName: 
\"kubernetes.io/projected/ee124d06-3766-45c4-a525-7036ee47a227-kube-api-access-6t7p8\") pod \"community-operators-chtjg\" (UID: \"ee124d06-3766-45c4-a525-7036ee47a227\") " pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:41:40 crc kubenswrapper[4660]: I0129 12:41:40.756294 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:41:41 crc kubenswrapper[4660]: I0129 12:41:41.251147 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chtjg"] Jan 29 12:41:41 crc kubenswrapper[4660]: I0129 12:41:41.354547 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chtjg" event={"ID":"ee124d06-3766-45c4-a525-7036ee47a227","Type":"ContainerStarted","Data":"4f526ed86526c9a351db0251fbd3081aa670e262bf2b67c96e769d3e6cd067eb"} Jan 29 12:41:42 crc kubenswrapper[4660]: I0129 12:41:42.371919 4660 generic.go:334] "Generic (PLEG): container finished" podID="ee124d06-3766-45c4-a525-7036ee47a227" containerID="95cee4d10e3da53b9bb73c7d5295bb5243e9b2f6f4b879a31b69d836cb94c682" exitCode=0 Jan 29 12:41:42 crc kubenswrapper[4660]: I0129 12:41:42.372116 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chtjg" event={"ID":"ee124d06-3766-45c4-a525-7036ee47a227","Type":"ContainerDied","Data":"95cee4d10e3da53b9bb73c7d5295bb5243e9b2f6f4b879a31b69d836cb94c682"} Jan 29 12:41:42 crc kubenswrapper[4660]: I0129 12:41:42.375679 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:41:44 crc kubenswrapper[4660]: I0129 12:41:44.390669 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chtjg" event={"ID":"ee124d06-3766-45c4-a525-7036ee47a227","Type":"ContainerStarted","Data":"35bf8b9c301e2d3b0b3b7226ce54a4d0a48887aad0e3b8717de70977a051b7f2"} Jan 29 12:41:45 
crc kubenswrapper[4660]: I0129 12:41:45.399260 4660 generic.go:334] "Generic (PLEG): container finished" podID="ee124d06-3766-45c4-a525-7036ee47a227" containerID="35bf8b9c301e2d3b0b3b7226ce54a4d0a48887aad0e3b8717de70977a051b7f2" exitCode=0 Jan 29 12:41:45 crc kubenswrapper[4660]: I0129 12:41:45.399307 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chtjg" event={"ID":"ee124d06-3766-45c4-a525-7036ee47a227","Type":"ContainerDied","Data":"35bf8b9c301e2d3b0b3b7226ce54a4d0a48887aad0e3b8717de70977a051b7f2"} Jan 29 12:41:47 crc kubenswrapper[4660]: I0129 12:41:47.421009 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chtjg" event={"ID":"ee124d06-3766-45c4-a525-7036ee47a227","Type":"ContainerStarted","Data":"3ed6ded7b37802143cca4741e5dd44fd858ce2ec385ef5ce0f72f1515e695bc1"} Jan 29 12:41:47 crc kubenswrapper[4660]: I0129 12:41:47.441784 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-chtjg" podStartSLOduration=2.686984631 podStartE2EDuration="7.441769198s" podCreationTimestamp="2026-01-29 12:41:40 +0000 UTC" firstStartedPulling="2026-01-29 12:41:42.375058737 +0000 UTC m=+2139.598000909" lastFinishedPulling="2026-01-29 12:41:47.129843344 +0000 UTC m=+2144.352785476" observedRunningTime="2026-01-29 12:41:47.437762773 +0000 UTC m=+2144.660704895" watchObservedRunningTime="2026-01-29 12:41:47.441769198 +0000 UTC m=+2144.664711320" Jan 29 12:41:50 crc kubenswrapper[4660]: I0129 12:41:50.757229 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:41:50 crc kubenswrapper[4660]: I0129 12:41:50.757549 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:41:50 crc kubenswrapper[4660]: I0129 12:41:50.825388 4660 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:41:51 crc kubenswrapper[4660]: I0129 12:41:51.372461 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rnh88"] Jan 29 12:41:51 crc kubenswrapper[4660]: I0129 12:41:51.379833 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:41:51 crc kubenswrapper[4660]: I0129 12:41:51.429072 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rnh88"] Jan 29 12:41:51 crc kubenswrapper[4660]: I0129 12:41:51.433811 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc953a9-61b3-47ae-8def-88406ef531b5-catalog-content\") pod \"redhat-operators-rnh88\" (UID: \"bdc953a9-61b3-47ae-8def-88406ef531b5\") " pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:41:51 crc kubenswrapper[4660]: I0129 12:41:51.433865 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc953a9-61b3-47ae-8def-88406ef531b5-utilities\") pod \"redhat-operators-rnh88\" (UID: \"bdc953a9-61b3-47ae-8def-88406ef531b5\") " pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:41:51 crc kubenswrapper[4660]: I0129 12:41:51.434170 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkkjp\" (UniqueName: \"kubernetes.io/projected/bdc953a9-61b3-47ae-8def-88406ef531b5-kube-api-access-tkkjp\") pod \"redhat-operators-rnh88\" (UID: \"bdc953a9-61b3-47ae-8def-88406ef531b5\") " pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:41:51 crc kubenswrapper[4660]: I0129 12:41:51.535448 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tkkjp\" (UniqueName: \"kubernetes.io/projected/bdc953a9-61b3-47ae-8def-88406ef531b5-kube-api-access-tkkjp\") pod \"redhat-operators-rnh88\" (UID: \"bdc953a9-61b3-47ae-8def-88406ef531b5\") " pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:41:51 crc kubenswrapper[4660]: I0129 12:41:51.535521 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc953a9-61b3-47ae-8def-88406ef531b5-catalog-content\") pod \"redhat-operators-rnh88\" (UID: \"bdc953a9-61b3-47ae-8def-88406ef531b5\") " pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:41:51 crc kubenswrapper[4660]: I0129 12:41:51.535544 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc953a9-61b3-47ae-8def-88406ef531b5-utilities\") pod \"redhat-operators-rnh88\" (UID: \"bdc953a9-61b3-47ae-8def-88406ef531b5\") " pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:41:51 crc kubenswrapper[4660]: I0129 12:41:51.535978 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc953a9-61b3-47ae-8def-88406ef531b5-utilities\") pod \"redhat-operators-rnh88\" (UID: \"bdc953a9-61b3-47ae-8def-88406ef531b5\") " pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:41:51 crc kubenswrapper[4660]: I0129 12:41:51.536217 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc953a9-61b3-47ae-8def-88406ef531b5-catalog-content\") pod \"redhat-operators-rnh88\" (UID: \"bdc953a9-61b3-47ae-8def-88406ef531b5\") " pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:41:51 crc kubenswrapper[4660]: I0129 12:41:51.556582 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkkjp\" (UniqueName: 
\"kubernetes.io/projected/bdc953a9-61b3-47ae-8def-88406ef531b5-kube-api-access-tkkjp\") pod \"redhat-operators-rnh88\" (UID: \"bdc953a9-61b3-47ae-8def-88406ef531b5\") " pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:41:51 crc kubenswrapper[4660]: I0129 12:41:51.739063 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:41:52 crc kubenswrapper[4660]: I0129 12:41:52.184172 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rnh88"] Jan 29 12:41:52 crc kubenswrapper[4660]: I0129 12:41:52.454984 4660 generic.go:334] "Generic (PLEG): container finished" podID="bdc953a9-61b3-47ae-8def-88406ef531b5" containerID="de51cf92795471739bb8027f661cf7951953c436faf590c94301639f140f666e" exitCode=0 Jan 29 12:41:52 crc kubenswrapper[4660]: I0129 12:41:52.455069 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnh88" event={"ID":"bdc953a9-61b3-47ae-8def-88406ef531b5","Type":"ContainerDied","Data":"de51cf92795471739bb8027f661cf7951953c436faf590c94301639f140f666e"} Jan 29 12:41:52 crc kubenswrapper[4660]: I0129 12:41:52.455317 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnh88" event={"ID":"bdc953a9-61b3-47ae-8def-88406ef531b5","Type":"ContainerStarted","Data":"de24c93d8c109ca55155cf3628267588cc2e23a7a4af1ab16027d34dd1a394e1"} Jan 29 12:41:53 crc kubenswrapper[4660]: I0129 12:41:53.463650 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnh88" event={"ID":"bdc953a9-61b3-47ae-8def-88406ef531b5","Type":"ContainerStarted","Data":"d76a86b39d991d64757f15934f3bb92906ca2c6e6bf63d0c5597600171a9a05b"} Jan 29 12:41:54 crc kubenswrapper[4660]: I0129 12:41:54.472703 4660 generic.go:334] "Generic (PLEG): container finished" podID="bdc953a9-61b3-47ae-8def-88406ef531b5" 
containerID="d76a86b39d991d64757f15934f3bb92906ca2c6e6bf63d0c5597600171a9a05b" exitCode=0 Jan 29 12:41:54 crc kubenswrapper[4660]: I0129 12:41:54.472745 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnh88" event={"ID":"bdc953a9-61b3-47ae-8def-88406ef531b5","Type":"ContainerDied","Data":"d76a86b39d991d64757f15934f3bb92906ca2c6e6bf63d0c5597600171a9a05b"} Jan 29 12:41:55 crc kubenswrapper[4660]: I0129 12:41:55.482775 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnh88" event={"ID":"bdc953a9-61b3-47ae-8def-88406ef531b5","Type":"ContainerStarted","Data":"2ca8c620683868c655fde1d048a9f7163691cf38fd109a0a9a89a1ed8054ef8d"} Jan 29 12:41:55 crc kubenswrapper[4660]: I0129 12:41:55.505630 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rnh88" podStartSLOduration=2.055920218 podStartE2EDuration="4.50561062s" podCreationTimestamp="2026-01-29 12:41:51 +0000 UTC" firstStartedPulling="2026-01-29 12:41:52.456625064 +0000 UTC m=+2149.679567196" lastFinishedPulling="2026-01-29 12:41:54.906315466 +0000 UTC m=+2152.129257598" observedRunningTime="2026-01-29 12:41:55.500068711 +0000 UTC m=+2152.723010843" watchObservedRunningTime="2026-01-29 12:41:55.50561062 +0000 UTC m=+2152.728552752" Jan 29 12:42:00 crc kubenswrapper[4660]: I0129 12:42:00.821680 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:42:00 crc kubenswrapper[4660]: I0129 12:42:00.880709 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-chtjg"] Jan 29 12:42:01 crc kubenswrapper[4660]: I0129 12:42:01.537228 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-chtjg" podUID="ee124d06-3766-45c4-a525-7036ee47a227" containerName="registry-server" 
containerID="cri-o://3ed6ded7b37802143cca4741e5dd44fd858ce2ec385ef5ce0f72f1515e695bc1" gracePeriod=2 Jan 29 12:42:01 crc kubenswrapper[4660]: I0129 12:42:01.740639 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:42:01 crc kubenswrapper[4660]: I0129 12:42:01.740710 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:42:01 crc kubenswrapper[4660]: I0129 12:42:01.811209 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.029644 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.199159 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t7p8\" (UniqueName: \"kubernetes.io/projected/ee124d06-3766-45c4-a525-7036ee47a227-kube-api-access-6t7p8\") pod \"ee124d06-3766-45c4-a525-7036ee47a227\" (UID: \"ee124d06-3766-45c4-a525-7036ee47a227\") " Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.199243 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee124d06-3766-45c4-a525-7036ee47a227-catalog-content\") pod \"ee124d06-3766-45c4-a525-7036ee47a227\" (UID: \"ee124d06-3766-45c4-a525-7036ee47a227\") " Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.199270 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee124d06-3766-45c4-a525-7036ee47a227-utilities\") pod \"ee124d06-3766-45c4-a525-7036ee47a227\" (UID: \"ee124d06-3766-45c4-a525-7036ee47a227\") " Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.199909 4660 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee124d06-3766-45c4-a525-7036ee47a227-utilities" (OuterVolumeSpecName: "utilities") pod "ee124d06-3766-45c4-a525-7036ee47a227" (UID: "ee124d06-3766-45c4-a525-7036ee47a227"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.210262 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee124d06-3766-45c4-a525-7036ee47a227-kube-api-access-6t7p8" (OuterVolumeSpecName: "kube-api-access-6t7p8") pod "ee124d06-3766-45c4-a525-7036ee47a227" (UID: "ee124d06-3766-45c4-a525-7036ee47a227"). InnerVolumeSpecName "kube-api-access-6t7p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.244184 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee124d06-3766-45c4-a525-7036ee47a227-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee124d06-3766-45c4-a525-7036ee47a227" (UID: "ee124d06-3766-45c4-a525-7036ee47a227"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.301140 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee124d06-3766-45c4-a525-7036ee47a227-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.301169 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee124d06-3766-45c4-a525-7036ee47a227-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.301179 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t7p8\" (UniqueName: \"kubernetes.io/projected/ee124d06-3766-45c4-a525-7036ee47a227-kube-api-access-6t7p8\") on node \"crc\" DevicePath \"\"" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.562484 4660 generic.go:334] "Generic (PLEG): container finished" podID="ee124d06-3766-45c4-a525-7036ee47a227" containerID="3ed6ded7b37802143cca4741e5dd44fd858ce2ec385ef5ce0f72f1515e695bc1" exitCode=0 Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.563260 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-chtjg" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.563953 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chtjg" event={"ID":"ee124d06-3766-45c4-a525-7036ee47a227","Type":"ContainerDied","Data":"3ed6ded7b37802143cca4741e5dd44fd858ce2ec385ef5ce0f72f1515e695bc1"} Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.564035 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chtjg" event={"ID":"ee124d06-3766-45c4-a525-7036ee47a227","Type":"ContainerDied","Data":"4f526ed86526c9a351db0251fbd3081aa670e262bf2b67c96e769d3e6cd067eb"} Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.564075 4660 scope.go:117] "RemoveContainer" containerID="3ed6ded7b37802143cca4741e5dd44fd858ce2ec385ef5ce0f72f1515e695bc1" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.585932 4660 scope.go:117] "RemoveContainer" containerID="35bf8b9c301e2d3b0b3b7226ce54a4d0a48887aad0e3b8717de70977a051b7f2" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.598726 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-chtjg"] Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.609379 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-chtjg"] Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.623897 4660 scope.go:117] "RemoveContainer" containerID="95cee4d10e3da53b9bb73c7d5295bb5243e9b2f6f4b879a31b69d836cb94c682" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.631481 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.644372 4660 scope.go:117] "RemoveContainer" containerID="3ed6ded7b37802143cca4741e5dd44fd858ce2ec385ef5ce0f72f1515e695bc1" Jan 29 12:42:02 crc 
kubenswrapper[4660]: E0129 12:42:02.648448 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ed6ded7b37802143cca4741e5dd44fd858ce2ec385ef5ce0f72f1515e695bc1\": container with ID starting with 3ed6ded7b37802143cca4741e5dd44fd858ce2ec385ef5ce0f72f1515e695bc1 not found: ID does not exist" containerID="3ed6ded7b37802143cca4741e5dd44fd858ce2ec385ef5ce0f72f1515e695bc1" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.648564 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ed6ded7b37802143cca4741e5dd44fd858ce2ec385ef5ce0f72f1515e695bc1"} err="failed to get container status \"3ed6ded7b37802143cca4741e5dd44fd858ce2ec385ef5ce0f72f1515e695bc1\": rpc error: code = NotFound desc = could not find container \"3ed6ded7b37802143cca4741e5dd44fd858ce2ec385ef5ce0f72f1515e695bc1\": container with ID starting with 3ed6ded7b37802143cca4741e5dd44fd858ce2ec385ef5ce0f72f1515e695bc1 not found: ID does not exist" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.648669 4660 scope.go:117] "RemoveContainer" containerID="35bf8b9c301e2d3b0b3b7226ce54a4d0a48887aad0e3b8717de70977a051b7f2" Jan 29 12:42:02 crc kubenswrapper[4660]: E0129 12:42:02.649167 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35bf8b9c301e2d3b0b3b7226ce54a4d0a48887aad0e3b8717de70977a051b7f2\": container with ID starting with 35bf8b9c301e2d3b0b3b7226ce54a4d0a48887aad0e3b8717de70977a051b7f2 not found: ID does not exist" containerID="35bf8b9c301e2d3b0b3b7226ce54a4d0a48887aad0e3b8717de70977a051b7f2" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.649274 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35bf8b9c301e2d3b0b3b7226ce54a4d0a48887aad0e3b8717de70977a051b7f2"} err="failed to get container status 
\"35bf8b9c301e2d3b0b3b7226ce54a4d0a48887aad0e3b8717de70977a051b7f2\": rpc error: code = NotFound desc = could not find container \"35bf8b9c301e2d3b0b3b7226ce54a4d0a48887aad0e3b8717de70977a051b7f2\": container with ID starting with 35bf8b9c301e2d3b0b3b7226ce54a4d0a48887aad0e3b8717de70977a051b7f2 not found: ID does not exist" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.649375 4660 scope.go:117] "RemoveContainer" containerID="95cee4d10e3da53b9bb73c7d5295bb5243e9b2f6f4b879a31b69d836cb94c682" Jan 29 12:42:02 crc kubenswrapper[4660]: E0129 12:42:02.649727 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95cee4d10e3da53b9bb73c7d5295bb5243e9b2f6f4b879a31b69d836cb94c682\": container with ID starting with 95cee4d10e3da53b9bb73c7d5295bb5243e9b2f6f4b879a31b69d836cb94c682 not found: ID does not exist" containerID="95cee4d10e3da53b9bb73c7d5295bb5243e9b2f6f4b879a31b69d836cb94c682" Jan 29 12:42:02 crc kubenswrapper[4660]: I0129 12:42:02.649835 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95cee4d10e3da53b9bb73c7d5295bb5243e9b2f6f4b879a31b69d836cb94c682"} err="failed to get container status \"95cee4d10e3da53b9bb73c7d5295bb5243e9b2f6f4b879a31b69d836cb94c682\": rpc error: code = NotFound desc = could not find container \"95cee4d10e3da53b9bb73c7d5295bb5243e9b2f6f4b879a31b69d836cb94c682\": container with ID starting with 95cee4d10e3da53b9bb73c7d5295bb5243e9b2f6f4b879a31b69d836cb94c682 not found: ID does not exist" Jan 29 12:42:03 crc kubenswrapper[4660]: I0129 12:42:03.481977 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee124d06-3766-45c4-a525-7036ee47a227" path="/var/lib/kubelet/pods/ee124d06-3766-45c4-a525-7036ee47a227/volumes" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.059382 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rnh88"] Jan 29 12:42:05 
crc kubenswrapper[4660]: I0129 12:42:05.059930 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rnh88" podUID="bdc953a9-61b3-47ae-8def-88406ef531b5" containerName="registry-server" containerID="cri-o://2ca8c620683868c655fde1d048a9f7163691cf38fd109a0a9a89a1ed8054ef8d" gracePeriod=2 Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.461493 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.588561 4660 generic.go:334] "Generic (PLEG): container finished" podID="bdc953a9-61b3-47ae-8def-88406ef531b5" containerID="2ca8c620683868c655fde1d048a9f7163691cf38fd109a0a9a89a1ed8054ef8d" exitCode=0 Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.588608 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnh88" event={"ID":"bdc953a9-61b3-47ae-8def-88406ef531b5","Type":"ContainerDied","Data":"2ca8c620683868c655fde1d048a9f7163691cf38fd109a0a9a89a1ed8054ef8d"} Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.588638 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnh88" event={"ID":"bdc953a9-61b3-47ae-8def-88406ef531b5","Type":"ContainerDied","Data":"de24c93d8c109ca55155cf3628267588cc2e23a7a4af1ab16027d34dd1a394e1"} Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.588658 4660 scope.go:117] "RemoveContainer" containerID="2ca8c620683868c655fde1d048a9f7163691cf38fd109a0a9a89a1ed8054ef8d" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.588804 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rnh88" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.608948 4660 scope.go:117] "RemoveContainer" containerID="d76a86b39d991d64757f15934f3bb92906ca2c6e6bf63d0c5597600171a9a05b" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.640034 4660 scope.go:117] "RemoveContainer" containerID="de51cf92795471739bb8027f661cf7951953c436faf590c94301639f140f666e" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.650952 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc953a9-61b3-47ae-8def-88406ef531b5-utilities\") pod \"bdc953a9-61b3-47ae-8def-88406ef531b5\" (UID: \"bdc953a9-61b3-47ae-8def-88406ef531b5\") " Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.651007 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkkjp\" (UniqueName: \"kubernetes.io/projected/bdc953a9-61b3-47ae-8def-88406ef531b5-kube-api-access-tkkjp\") pod \"bdc953a9-61b3-47ae-8def-88406ef531b5\" (UID: \"bdc953a9-61b3-47ae-8def-88406ef531b5\") " Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.651045 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc953a9-61b3-47ae-8def-88406ef531b5-catalog-content\") pod \"bdc953a9-61b3-47ae-8def-88406ef531b5\" (UID: \"bdc953a9-61b3-47ae-8def-88406ef531b5\") " Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.651884 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc953a9-61b3-47ae-8def-88406ef531b5-utilities" (OuterVolumeSpecName: "utilities") pod "bdc953a9-61b3-47ae-8def-88406ef531b5" (UID: "bdc953a9-61b3-47ae-8def-88406ef531b5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.661360 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc953a9-61b3-47ae-8def-88406ef531b5-kube-api-access-tkkjp" (OuterVolumeSpecName: "kube-api-access-tkkjp") pod "bdc953a9-61b3-47ae-8def-88406ef531b5" (UID: "bdc953a9-61b3-47ae-8def-88406ef531b5"). InnerVolumeSpecName "kube-api-access-tkkjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.662730 4660 scope.go:117] "RemoveContainer" containerID="2ca8c620683868c655fde1d048a9f7163691cf38fd109a0a9a89a1ed8054ef8d" Jan 29 12:42:05 crc kubenswrapper[4660]: E0129 12:42:05.663117 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ca8c620683868c655fde1d048a9f7163691cf38fd109a0a9a89a1ed8054ef8d\": container with ID starting with 2ca8c620683868c655fde1d048a9f7163691cf38fd109a0a9a89a1ed8054ef8d not found: ID does not exist" containerID="2ca8c620683868c655fde1d048a9f7163691cf38fd109a0a9a89a1ed8054ef8d" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.663169 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ca8c620683868c655fde1d048a9f7163691cf38fd109a0a9a89a1ed8054ef8d"} err="failed to get container status \"2ca8c620683868c655fde1d048a9f7163691cf38fd109a0a9a89a1ed8054ef8d\": rpc error: code = NotFound desc = could not find container \"2ca8c620683868c655fde1d048a9f7163691cf38fd109a0a9a89a1ed8054ef8d\": container with ID starting with 2ca8c620683868c655fde1d048a9f7163691cf38fd109a0a9a89a1ed8054ef8d not found: ID does not exist" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.663206 4660 scope.go:117] "RemoveContainer" containerID="d76a86b39d991d64757f15934f3bb92906ca2c6e6bf63d0c5597600171a9a05b" Jan 29 12:42:05 crc kubenswrapper[4660]: E0129 12:42:05.663523 
4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d76a86b39d991d64757f15934f3bb92906ca2c6e6bf63d0c5597600171a9a05b\": container with ID starting with d76a86b39d991d64757f15934f3bb92906ca2c6e6bf63d0c5597600171a9a05b not found: ID does not exist" containerID="d76a86b39d991d64757f15934f3bb92906ca2c6e6bf63d0c5597600171a9a05b" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.663547 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d76a86b39d991d64757f15934f3bb92906ca2c6e6bf63d0c5597600171a9a05b"} err="failed to get container status \"d76a86b39d991d64757f15934f3bb92906ca2c6e6bf63d0c5597600171a9a05b\": rpc error: code = NotFound desc = could not find container \"d76a86b39d991d64757f15934f3bb92906ca2c6e6bf63d0c5597600171a9a05b\": container with ID starting with d76a86b39d991d64757f15934f3bb92906ca2c6e6bf63d0c5597600171a9a05b not found: ID does not exist" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.663566 4660 scope.go:117] "RemoveContainer" containerID="de51cf92795471739bb8027f661cf7951953c436faf590c94301639f140f666e" Jan 29 12:42:05 crc kubenswrapper[4660]: E0129 12:42:05.663986 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de51cf92795471739bb8027f661cf7951953c436faf590c94301639f140f666e\": container with ID starting with de51cf92795471739bb8027f661cf7951953c436faf590c94301639f140f666e not found: ID does not exist" containerID="de51cf92795471739bb8027f661cf7951953c436faf590c94301639f140f666e" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.664018 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de51cf92795471739bb8027f661cf7951953c436faf590c94301639f140f666e"} err="failed to get container status \"de51cf92795471739bb8027f661cf7951953c436faf590c94301639f140f666e\": rpc error: code = 
NotFound desc = could not find container \"de51cf92795471739bb8027f661cf7951953c436faf590c94301639f140f666e\": container with ID starting with de51cf92795471739bb8027f661cf7951953c436faf590c94301639f140f666e not found: ID does not exist" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.753326 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc953a9-61b3-47ae-8def-88406ef531b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.753357 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkkjp\" (UniqueName: \"kubernetes.io/projected/bdc953a9-61b3-47ae-8def-88406ef531b5-kube-api-access-tkkjp\") on node \"crc\" DevicePath \"\"" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.773608 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc953a9-61b3-47ae-8def-88406ef531b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdc953a9-61b3-47ae-8def-88406ef531b5" (UID: "bdc953a9-61b3-47ae-8def-88406ef531b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.855091 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc953a9-61b3-47ae-8def-88406ef531b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.925053 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rnh88"] Jan 29 12:42:05 crc kubenswrapper[4660]: I0129 12:42:05.931654 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rnh88"] Jan 29 12:42:07 crc kubenswrapper[4660]: I0129 12:42:07.484742 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdc953a9-61b3-47ae-8def-88406ef531b5" path="/var/lib/kubelet/pods/bdc953a9-61b3-47ae-8def-88406ef531b5/volumes" Jan 29 12:42:26 crc kubenswrapper[4660]: I0129 12:42:26.269990 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:42:26 crc kubenswrapper[4660]: I0129 12:42:26.270494 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:42:56 crc kubenswrapper[4660]: I0129 12:42:56.268953 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 29 12:42:56 crc kubenswrapper[4660]: I0129 12:42:56.269613 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:43:14 crc kubenswrapper[4660]: I0129 12:43:14.960305 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jq8j5"] Jan 29 12:43:14 crc kubenswrapper[4660]: E0129 12:43:14.961191 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc953a9-61b3-47ae-8def-88406ef531b5" containerName="registry-server" Jan 29 12:43:14 crc kubenswrapper[4660]: I0129 12:43:14.961206 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc953a9-61b3-47ae-8def-88406ef531b5" containerName="registry-server" Jan 29 12:43:14 crc kubenswrapper[4660]: E0129 12:43:14.961221 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc953a9-61b3-47ae-8def-88406ef531b5" containerName="extract-utilities" Jan 29 12:43:14 crc kubenswrapper[4660]: I0129 12:43:14.961230 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc953a9-61b3-47ae-8def-88406ef531b5" containerName="extract-utilities" Jan 29 12:43:14 crc kubenswrapper[4660]: E0129 12:43:14.961246 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee124d06-3766-45c4-a525-7036ee47a227" containerName="extract-content" Jan 29 12:43:14 crc kubenswrapper[4660]: I0129 12:43:14.961253 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee124d06-3766-45c4-a525-7036ee47a227" containerName="extract-content" Jan 29 12:43:14 crc kubenswrapper[4660]: E0129 12:43:14.961264 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee124d06-3766-45c4-a525-7036ee47a227" containerName="registry-server" Jan 29 
12:43:14 crc kubenswrapper[4660]: I0129 12:43:14.961270 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee124d06-3766-45c4-a525-7036ee47a227" containerName="registry-server" Jan 29 12:43:14 crc kubenswrapper[4660]: E0129 12:43:14.961284 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee124d06-3766-45c4-a525-7036ee47a227" containerName="extract-utilities" Jan 29 12:43:14 crc kubenswrapper[4660]: I0129 12:43:14.961291 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee124d06-3766-45c4-a525-7036ee47a227" containerName="extract-utilities" Jan 29 12:43:14 crc kubenswrapper[4660]: E0129 12:43:14.961305 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc953a9-61b3-47ae-8def-88406ef531b5" containerName="extract-content" Jan 29 12:43:14 crc kubenswrapper[4660]: I0129 12:43:14.961314 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc953a9-61b3-47ae-8def-88406ef531b5" containerName="extract-content" Jan 29 12:43:14 crc kubenswrapper[4660]: I0129 12:43:14.961447 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee124d06-3766-45c4-a525-7036ee47a227" containerName="registry-server" Jan 29 12:43:14 crc kubenswrapper[4660]: I0129 12:43:14.961467 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdc953a9-61b3-47ae-8def-88406ef531b5" containerName="registry-server" Jan 29 12:43:14 crc kubenswrapper[4660]: I0129 12:43:14.962578 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:14 crc kubenswrapper[4660]: I0129 12:43:14.988365 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jq8j5"] Jan 29 12:43:15 crc kubenswrapper[4660]: I0129 12:43:15.101606 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqf25\" (UniqueName: \"kubernetes.io/projected/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-kube-api-access-qqf25\") pod \"redhat-marketplace-jq8j5\" (UID: \"c6db0e66-e6a5-4aaf-8949-01985b9b9cad\") " pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:15 crc kubenswrapper[4660]: I0129 12:43:15.101996 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-catalog-content\") pod \"redhat-marketplace-jq8j5\" (UID: \"c6db0e66-e6a5-4aaf-8949-01985b9b9cad\") " pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:15 crc kubenswrapper[4660]: I0129 12:43:15.102152 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-utilities\") pod \"redhat-marketplace-jq8j5\" (UID: \"c6db0e66-e6a5-4aaf-8949-01985b9b9cad\") " pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:15 crc kubenswrapper[4660]: I0129 12:43:15.203428 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-catalog-content\") pod \"redhat-marketplace-jq8j5\" (UID: \"c6db0e66-e6a5-4aaf-8949-01985b9b9cad\") " pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:15 crc kubenswrapper[4660]: I0129 12:43:15.203500 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-utilities\") pod \"redhat-marketplace-jq8j5\" (UID: \"c6db0e66-e6a5-4aaf-8949-01985b9b9cad\") " pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:15 crc kubenswrapper[4660]: I0129 12:43:15.203532 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqf25\" (UniqueName: \"kubernetes.io/projected/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-kube-api-access-qqf25\") pod \"redhat-marketplace-jq8j5\" (UID: \"c6db0e66-e6a5-4aaf-8949-01985b9b9cad\") " pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:15 crc kubenswrapper[4660]: I0129 12:43:15.204179 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-catalog-content\") pod \"redhat-marketplace-jq8j5\" (UID: \"c6db0e66-e6a5-4aaf-8949-01985b9b9cad\") " pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:15 crc kubenswrapper[4660]: I0129 12:43:15.204297 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-utilities\") pod \"redhat-marketplace-jq8j5\" (UID: \"c6db0e66-e6a5-4aaf-8949-01985b9b9cad\") " pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:15 crc kubenswrapper[4660]: I0129 12:43:15.227641 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqf25\" (UniqueName: \"kubernetes.io/projected/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-kube-api-access-qqf25\") pod \"redhat-marketplace-jq8j5\" (UID: \"c6db0e66-e6a5-4aaf-8949-01985b9b9cad\") " pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:15 crc kubenswrapper[4660]: I0129 12:43:15.289917 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:15 crc kubenswrapper[4660]: I0129 12:43:15.821184 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jq8j5"] Jan 29 12:43:15 crc kubenswrapper[4660]: W0129 12:43:15.827947 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6db0e66_e6a5_4aaf_8949_01985b9b9cad.slice/crio-00ef3363d432957157aa45524a48395f928f5470a504b435d7aa9ab59d6dab02 WatchSource:0}: Error finding container 00ef3363d432957157aa45524a48395f928f5470a504b435d7aa9ab59d6dab02: Status 404 returned error can't find the container with id 00ef3363d432957157aa45524a48395f928f5470a504b435d7aa9ab59d6dab02 Jan 29 12:43:16 crc kubenswrapper[4660]: I0129 12:43:16.381425 4660 generic.go:334] "Generic (PLEG): container finished" podID="c6db0e66-e6a5-4aaf-8949-01985b9b9cad" containerID="ba9faef73e96825a735fed8c2bbfb502da75cce4313db24ef01ce6c4b1cb855c" exitCode=0 Jan 29 12:43:16 crc kubenswrapper[4660]: I0129 12:43:16.381665 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jq8j5" event={"ID":"c6db0e66-e6a5-4aaf-8949-01985b9b9cad","Type":"ContainerDied","Data":"ba9faef73e96825a735fed8c2bbfb502da75cce4313db24ef01ce6c4b1cb855c"} Jan 29 12:43:16 crc kubenswrapper[4660]: I0129 12:43:16.381806 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jq8j5" event={"ID":"c6db0e66-e6a5-4aaf-8949-01985b9b9cad","Type":"ContainerStarted","Data":"00ef3363d432957157aa45524a48395f928f5470a504b435d7aa9ab59d6dab02"} Jan 29 12:43:18 crc kubenswrapper[4660]: I0129 12:43:18.396360 4660 generic.go:334] "Generic (PLEG): container finished" podID="c6db0e66-e6a5-4aaf-8949-01985b9b9cad" containerID="f23298866b072a20973e9e6722e6c773115208803abf86225a67401bcf0ee1b9" exitCode=0 Jan 29 12:43:18 crc kubenswrapper[4660]: I0129 
12:43:18.396406 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jq8j5" event={"ID":"c6db0e66-e6a5-4aaf-8949-01985b9b9cad","Type":"ContainerDied","Data":"f23298866b072a20973e9e6722e6c773115208803abf86225a67401bcf0ee1b9"} Jan 29 12:43:19 crc kubenswrapper[4660]: I0129 12:43:19.411050 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jq8j5" event={"ID":"c6db0e66-e6a5-4aaf-8949-01985b9b9cad","Type":"ContainerStarted","Data":"7874e404fcb108eb55c9139042e6a60d8428423f2b6a0cd01af66ce21d58f2eb"} Jan 29 12:43:19 crc kubenswrapper[4660]: I0129 12:43:19.432355 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jq8j5" podStartSLOduration=2.936363846 podStartE2EDuration="5.432334959s" podCreationTimestamp="2026-01-29 12:43:14 +0000 UTC" firstStartedPulling="2026-01-29 12:43:16.382790197 +0000 UTC m=+2233.605732329" lastFinishedPulling="2026-01-29 12:43:18.87876128 +0000 UTC m=+2236.101703442" observedRunningTime="2026-01-29 12:43:19.425876875 +0000 UTC m=+2236.648819007" watchObservedRunningTime="2026-01-29 12:43:19.432334959 +0000 UTC m=+2236.655277091" Jan 29 12:43:25 crc kubenswrapper[4660]: I0129 12:43:25.290066 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:25 crc kubenswrapper[4660]: I0129 12:43:25.290539 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:25 crc kubenswrapper[4660]: I0129 12:43:25.358255 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:25 crc kubenswrapper[4660]: I0129 12:43:25.522624 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 
12:43:25 crc kubenswrapper[4660]: I0129 12:43:25.599586 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jq8j5"] Jan 29 12:43:26 crc kubenswrapper[4660]: I0129 12:43:26.269795 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:43:26 crc kubenswrapper[4660]: I0129 12:43:26.270184 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:43:26 crc kubenswrapper[4660]: I0129 12:43:26.270304 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:43:26 crc kubenswrapper[4660]: I0129 12:43:26.271010 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae"} pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:43:26 crc kubenswrapper[4660]: I0129 12:43:26.271165 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" containerID="cri-o://88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" gracePeriod=600 Jan 29 12:43:26 crc kubenswrapper[4660]: E0129 
12:43:26.898628 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:43:27 crc kubenswrapper[4660]: I0129 12:43:27.489836 4660 generic.go:334] "Generic (PLEG): container finished" podID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" exitCode=0 Jan 29 12:43:27 crc kubenswrapper[4660]: I0129 12:43:27.489911 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerDied","Data":"88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae"} Jan 29 12:43:27 crc kubenswrapper[4660]: I0129 12:43:27.489957 4660 scope.go:117] "RemoveContainer" containerID="c39de0bdc07a6f37674e61b9ccddab7ebcdf2567c3486a3bc9f5cf5b7dd95f83" Jan 29 12:43:27 crc kubenswrapper[4660]: I0129 12:43:27.490015 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jq8j5" podUID="c6db0e66-e6a5-4aaf-8949-01985b9b9cad" containerName="registry-server" containerID="cri-o://7874e404fcb108eb55c9139042e6a60d8428423f2b6a0cd01af66ce21d58f2eb" gracePeriod=2 Jan 29 12:43:27 crc kubenswrapper[4660]: I0129 12:43:27.490829 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:43:27 crc kubenswrapper[4660]: E0129 12:43:27.491370 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:43:27 crc kubenswrapper[4660]: I0129 12:43:27.865908 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.004433 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqf25\" (UniqueName: \"kubernetes.io/projected/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-kube-api-access-qqf25\") pod \"c6db0e66-e6a5-4aaf-8949-01985b9b9cad\" (UID: \"c6db0e66-e6a5-4aaf-8949-01985b9b9cad\") " Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.005003 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-catalog-content\") pod \"c6db0e66-e6a5-4aaf-8949-01985b9b9cad\" (UID: \"c6db0e66-e6a5-4aaf-8949-01985b9b9cad\") " Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.005100 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-utilities\") pod \"c6db0e66-e6a5-4aaf-8949-01985b9b9cad\" (UID: \"c6db0e66-e6a5-4aaf-8949-01985b9b9cad\") " Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.005838 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-utilities" (OuterVolumeSpecName: "utilities") pod "c6db0e66-e6a5-4aaf-8949-01985b9b9cad" (UID: "c6db0e66-e6a5-4aaf-8949-01985b9b9cad"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.009747 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-kube-api-access-qqf25" (OuterVolumeSpecName: "kube-api-access-qqf25") pod "c6db0e66-e6a5-4aaf-8949-01985b9b9cad" (UID: "c6db0e66-e6a5-4aaf-8949-01985b9b9cad"). InnerVolumeSpecName "kube-api-access-qqf25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.027166 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6db0e66-e6a5-4aaf-8949-01985b9b9cad" (UID: "c6db0e66-e6a5-4aaf-8949-01985b9b9cad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.106414 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.106458 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.106494 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqf25\" (UniqueName: \"kubernetes.io/projected/c6db0e66-e6a5-4aaf-8949-01985b9b9cad-kube-api-access-qqf25\") on node \"crc\" DevicePath \"\"" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.498288 4660 generic.go:334] "Generic (PLEG): container finished" podID="c6db0e66-e6a5-4aaf-8949-01985b9b9cad" 
containerID="7874e404fcb108eb55c9139042e6a60d8428423f2b6a0cd01af66ce21d58f2eb" exitCode=0 Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.498333 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jq8j5" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.498456 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jq8j5" event={"ID":"c6db0e66-e6a5-4aaf-8949-01985b9b9cad","Type":"ContainerDied","Data":"7874e404fcb108eb55c9139042e6a60d8428423f2b6a0cd01af66ce21d58f2eb"} Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.498512 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jq8j5" event={"ID":"c6db0e66-e6a5-4aaf-8949-01985b9b9cad","Type":"ContainerDied","Data":"00ef3363d432957157aa45524a48395f928f5470a504b435d7aa9ab59d6dab02"} Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.498606 4660 scope.go:117] "RemoveContainer" containerID="7874e404fcb108eb55c9139042e6a60d8428423f2b6a0cd01af66ce21d58f2eb" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.523934 4660 scope.go:117] "RemoveContainer" containerID="f23298866b072a20973e9e6722e6c773115208803abf86225a67401bcf0ee1b9" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.543801 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jq8j5"] Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.561389 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jq8j5"] Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.603911 4660 scope.go:117] "RemoveContainer" containerID="ba9faef73e96825a735fed8c2bbfb502da75cce4313db24ef01ce6c4b1cb855c" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.635048 4660 scope.go:117] "RemoveContainer" containerID="7874e404fcb108eb55c9139042e6a60d8428423f2b6a0cd01af66ce21d58f2eb" Jan 29 
12:43:28 crc kubenswrapper[4660]: E0129 12:43:28.642465 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7874e404fcb108eb55c9139042e6a60d8428423f2b6a0cd01af66ce21d58f2eb\": container with ID starting with 7874e404fcb108eb55c9139042e6a60d8428423f2b6a0cd01af66ce21d58f2eb not found: ID does not exist" containerID="7874e404fcb108eb55c9139042e6a60d8428423f2b6a0cd01af66ce21d58f2eb" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.642505 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7874e404fcb108eb55c9139042e6a60d8428423f2b6a0cd01af66ce21d58f2eb"} err="failed to get container status \"7874e404fcb108eb55c9139042e6a60d8428423f2b6a0cd01af66ce21d58f2eb\": rpc error: code = NotFound desc = could not find container \"7874e404fcb108eb55c9139042e6a60d8428423f2b6a0cd01af66ce21d58f2eb\": container with ID starting with 7874e404fcb108eb55c9139042e6a60d8428423f2b6a0cd01af66ce21d58f2eb not found: ID does not exist" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.642530 4660 scope.go:117] "RemoveContainer" containerID="f23298866b072a20973e9e6722e6c773115208803abf86225a67401bcf0ee1b9" Jan 29 12:43:28 crc kubenswrapper[4660]: E0129 12:43:28.644085 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f23298866b072a20973e9e6722e6c773115208803abf86225a67401bcf0ee1b9\": container with ID starting with f23298866b072a20973e9e6722e6c773115208803abf86225a67401bcf0ee1b9 not found: ID does not exist" containerID="f23298866b072a20973e9e6722e6c773115208803abf86225a67401bcf0ee1b9" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.644123 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f23298866b072a20973e9e6722e6c773115208803abf86225a67401bcf0ee1b9"} err="failed to get container status 
\"f23298866b072a20973e9e6722e6c773115208803abf86225a67401bcf0ee1b9\": rpc error: code = NotFound desc = could not find container \"f23298866b072a20973e9e6722e6c773115208803abf86225a67401bcf0ee1b9\": container with ID starting with f23298866b072a20973e9e6722e6c773115208803abf86225a67401bcf0ee1b9 not found: ID does not exist" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.644164 4660 scope.go:117] "RemoveContainer" containerID="ba9faef73e96825a735fed8c2bbfb502da75cce4313db24ef01ce6c4b1cb855c" Jan 29 12:43:28 crc kubenswrapper[4660]: E0129 12:43:28.644452 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba9faef73e96825a735fed8c2bbfb502da75cce4313db24ef01ce6c4b1cb855c\": container with ID starting with ba9faef73e96825a735fed8c2bbfb502da75cce4313db24ef01ce6c4b1cb855c not found: ID does not exist" containerID="ba9faef73e96825a735fed8c2bbfb502da75cce4313db24ef01ce6c4b1cb855c" Jan 29 12:43:28 crc kubenswrapper[4660]: I0129 12:43:28.644478 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba9faef73e96825a735fed8c2bbfb502da75cce4313db24ef01ce6c4b1cb855c"} err="failed to get container status \"ba9faef73e96825a735fed8c2bbfb502da75cce4313db24ef01ce6c4b1cb855c\": rpc error: code = NotFound desc = could not find container \"ba9faef73e96825a735fed8c2bbfb502da75cce4313db24ef01ce6c4b1cb855c\": container with ID starting with ba9faef73e96825a735fed8c2bbfb502da75cce4313db24ef01ce6c4b1cb855c not found: ID does not exist" Jan 29 12:43:29 crc kubenswrapper[4660]: I0129 12:43:29.479509 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6db0e66-e6a5-4aaf-8949-01985b9b9cad" path="/var/lib/kubelet/pods/c6db0e66-e6a5-4aaf-8949-01985b9b9cad/volumes" Jan 29 12:43:42 crc kubenswrapper[4660]: I0129 12:43:42.470922 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 
12:43:42 crc kubenswrapper[4660]: E0129 12:43:42.471913 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:43:56 crc kubenswrapper[4660]: I0129 12:43:56.470678 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:43:56 crc kubenswrapper[4660]: E0129 12:43:56.471513 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:44:09 crc kubenswrapper[4660]: I0129 12:44:09.470832 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:44:09 crc kubenswrapper[4660]: E0129 12:44:09.471959 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:44:22 crc kubenswrapper[4660]: I0129 12:44:22.470623 4660 scope.go:117] "RemoveContainer" 
containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:44:22 crc kubenswrapper[4660]: E0129 12:44:22.471458 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:44:36 crc kubenswrapper[4660]: I0129 12:44:36.469653 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:44:36 crc kubenswrapper[4660]: E0129 12:44:36.470537 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:44:51 crc kubenswrapper[4660]: I0129 12:44:51.469529 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:44:51 crc kubenswrapper[4660]: E0129 12:44:51.470301 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.159656 4660 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts"] Jan 29 12:45:00 crc kubenswrapper[4660]: E0129 12:45:00.162092 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6db0e66-e6a5-4aaf-8949-01985b9b9cad" containerName="extract-content" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.162115 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6db0e66-e6a5-4aaf-8949-01985b9b9cad" containerName="extract-content" Jan 29 12:45:00 crc kubenswrapper[4660]: E0129 12:45:00.162134 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6db0e66-e6a5-4aaf-8949-01985b9b9cad" containerName="extract-utilities" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.162143 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6db0e66-e6a5-4aaf-8949-01985b9b9cad" containerName="extract-utilities" Jan 29 12:45:00 crc kubenswrapper[4660]: E0129 12:45:00.162165 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6db0e66-e6a5-4aaf-8949-01985b9b9cad" containerName="registry-server" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.162173 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6db0e66-e6a5-4aaf-8949-01985b9b9cad" containerName="registry-server" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.162330 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6db0e66-e6a5-4aaf-8949-01985b9b9cad" containerName="registry-server" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.162958 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.167094 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.167101 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.180055 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts"] Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.235017 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f68d5528-8910-4f1e-b413-ed505497a542-secret-volume\") pod \"collect-profiles-29494845-kvqts\" (UID: \"f68d5528-8910-4f1e-b413-ed505497a542\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.235110 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f68d5528-8910-4f1e-b413-ed505497a542-config-volume\") pod \"collect-profiles-29494845-kvqts\" (UID: \"f68d5528-8910-4f1e-b413-ed505497a542\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.235140 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8kn2\" (UniqueName: \"kubernetes.io/projected/f68d5528-8910-4f1e-b413-ed505497a542-kube-api-access-d8kn2\") pod \"collect-profiles-29494845-kvqts\" (UID: \"f68d5528-8910-4f1e-b413-ed505497a542\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.336677 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f68d5528-8910-4f1e-b413-ed505497a542-secret-volume\") pod \"collect-profiles-29494845-kvqts\" (UID: \"f68d5528-8910-4f1e-b413-ed505497a542\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.336771 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f68d5528-8910-4f1e-b413-ed505497a542-config-volume\") pod \"collect-profiles-29494845-kvqts\" (UID: \"f68d5528-8910-4f1e-b413-ed505497a542\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.336797 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8kn2\" (UniqueName: \"kubernetes.io/projected/f68d5528-8910-4f1e-b413-ed505497a542-kube-api-access-d8kn2\") pod \"collect-profiles-29494845-kvqts\" (UID: \"f68d5528-8910-4f1e-b413-ed505497a542\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.337851 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f68d5528-8910-4f1e-b413-ed505497a542-config-volume\") pod \"collect-profiles-29494845-kvqts\" (UID: \"f68d5528-8910-4f1e-b413-ed505497a542\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.343850 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f68d5528-8910-4f1e-b413-ed505497a542-secret-volume\") pod \"collect-profiles-29494845-kvqts\" (UID: \"f68d5528-8910-4f1e-b413-ed505497a542\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.355574 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8kn2\" (UniqueName: \"kubernetes.io/projected/f68d5528-8910-4f1e-b413-ed505497a542-kube-api-access-d8kn2\") pod \"collect-profiles-29494845-kvqts\" (UID: \"f68d5528-8910-4f1e-b413-ed505497a542\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.494975 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" Jan 29 12:45:00 crc kubenswrapper[4660]: I0129 12:45:00.947962 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts"] Jan 29 12:45:00 crc kubenswrapper[4660]: W0129 12:45:00.959936 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf68d5528_8910_4f1e_b413_ed505497a542.slice/crio-37f3652fa233337c0a988045a6a53df9aa9980d90cd3ee47d2b1e55ebf2677a4 WatchSource:0}: Error finding container 37f3652fa233337c0a988045a6a53df9aa9980d90cd3ee47d2b1e55ebf2677a4: Status 404 returned error can't find the container with id 37f3652fa233337c0a988045a6a53df9aa9980d90cd3ee47d2b1e55ebf2677a4 Jan 29 12:45:01 crc kubenswrapper[4660]: I0129 12:45:01.368489 4660 generic.go:334] "Generic (PLEG): container finished" podID="f68d5528-8910-4f1e-b413-ed505497a542" containerID="9bc324670afe0cedb46eb7e9825cd0ae928653dc8ee4018eea3168446ec045e1" exitCode=0 Jan 29 12:45:01 crc kubenswrapper[4660]: I0129 12:45:01.368572 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" event={"ID":"f68d5528-8910-4f1e-b413-ed505497a542","Type":"ContainerDied","Data":"9bc324670afe0cedb46eb7e9825cd0ae928653dc8ee4018eea3168446ec045e1"} Jan 29 12:45:01 crc kubenswrapper[4660]: I0129 12:45:01.368829 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" event={"ID":"f68d5528-8910-4f1e-b413-ed505497a542","Type":"ContainerStarted","Data":"37f3652fa233337c0a988045a6a53df9aa9980d90cd3ee47d2b1e55ebf2677a4"} Jan 29 12:45:02 crc kubenswrapper[4660]: I0129 12:45:02.470643 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:45:02 crc kubenswrapper[4660]: E0129 12:45:02.471843 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:45:02 crc kubenswrapper[4660]: I0129 12:45:02.665856 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" Jan 29 12:45:02 crc kubenswrapper[4660]: I0129 12:45:02.775629 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f68d5528-8910-4f1e-b413-ed505497a542-config-volume\") pod \"f68d5528-8910-4f1e-b413-ed505497a542\" (UID: \"f68d5528-8910-4f1e-b413-ed505497a542\") " Jan 29 12:45:02 crc kubenswrapper[4660]: I0129 12:45:02.775730 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8kn2\" (UniqueName: \"kubernetes.io/projected/f68d5528-8910-4f1e-b413-ed505497a542-kube-api-access-d8kn2\") pod \"f68d5528-8910-4f1e-b413-ed505497a542\" (UID: \"f68d5528-8910-4f1e-b413-ed505497a542\") " Jan 29 12:45:02 crc kubenswrapper[4660]: I0129 12:45:02.775879 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f68d5528-8910-4f1e-b413-ed505497a542-secret-volume\") pod \"f68d5528-8910-4f1e-b413-ed505497a542\" (UID: \"f68d5528-8910-4f1e-b413-ed505497a542\") " Jan 29 12:45:02 crc kubenswrapper[4660]: I0129 12:45:02.776522 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f68d5528-8910-4f1e-b413-ed505497a542-config-volume" (OuterVolumeSpecName: "config-volume") pod "f68d5528-8910-4f1e-b413-ed505497a542" (UID: "f68d5528-8910-4f1e-b413-ed505497a542"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:45:02 crc kubenswrapper[4660]: I0129 12:45:02.781390 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f68d5528-8910-4f1e-b413-ed505497a542-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f68d5528-8910-4f1e-b413-ed505497a542" (UID: "f68d5528-8910-4f1e-b413-ed505497a542"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:45:02 crc kubenswrapper[4660]: I0129 12:45:02.782835 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f68d5528-8910-4f1e-b413-ed505497a542-kube-api-access-d8kn2" (OuterVolumeSpecName: "kube-api-access-d8kn2") pod "f68d5528-8910-4f1e-b413-ed505497a542" (UID: "f68d5528-8910-4f1e-b413-ed505497a542"). InnerVolumeSpecName "kube-api-access-d8kn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:45:02 crc kubenswrapper[4660]: I0129 12:45:02.878244 4660 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f68d5528-8910-4f1e-b413-ed505497a542-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:45:02 crc kubenswrapper[4660]: I0129 12:45:02.878298 4660 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f68d5528-8910-4f1e-b413-ed505497a542-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:45:02 crc kubenswrapper[4660]: I0129 12:45:02.878322 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8kn2\" (UniqueName: \"kubernetes.io/projected/f68d5528-8910-4f1e-b413-ed505497a542-kube-api-access-d8kn2\") on node \"crc\" DevicePath \"\"" Jan 29 12:45:03 crc kubenswrapper[4660]: I0129 12:45:03.384900 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" event={"ID":"f68d5528-8910-4f1e-b413-ed505497a542","Type":"ContainerDied","Data":"37f3652fa233337c0a988045a6a53df9aa9980d90cd3ee47d2b1e55ebf2677a4"} Jan 29 12:45:03 crc kubenswrapper[4660]: I0129 12:45:03.385165 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37f3652fa233337c0a988045a6a53df9aa9980d90cd3ee47d2b1e55ebf2677a4" Jan 29 12:45:03 crc kubenswrapper[4660]: I0129 12:45:03.384946 4660 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-kvqts" Jan 29 12:45:03 crc kubenswrapper[4660]: I0129 12:45:03.744943 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss"] Jan 29 12:45:03 crc kubenswrapper[4660]: I0129 12:45:03.750926 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-96nss"] Jan 29 12:45:05 crc kubenswrapper[4660]: I0129 12:45:05.477416 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd77e369-3bfb-4bd7-aca5-441b93b3a2c8" path="/var/lib/kubelet/pods/fd77e369-3bfb-4bd7-aca5-441b93b3a2c8/volumes" Jan 29 12:45:16 crc kubenswrapper[4660]: I0129 12:45:16.469824 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:45:16 crc kubenswrapper[4660]: E0129 12:45:16.470481 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:45:17 crc kubenswrapper[4660]: I0129 12:45:17.082303 4660 scope.go:117] "RemoveContainer" containerID="ef971a7899770bea42b59d8d501ef40b0c07efc5fa0a25dfdf7a4086b9fde529" Jan 29 12:45:29 crc kubenswrapper[4660]: I0129 12:45:29.469890 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:45:29 crc kubenswrapper[4660]: E0129 12:45:29.471999 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:45:42 crc kubenswrapper[4660]: I0129 12:45:42.469673 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:45:42 crc kubenswrapper[4660]: E0129 12:45:42.470567 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:45:54 crc kubenswrapper[4660]: I0129 12:45:54.470082 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:45:54 crc kubenswrapper[4660]: E0129 12:45:54.470710 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:46:05 crc kubenswrapper[4660]: I0129 12:46:05.470535 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:46:05 crc kubenswrapper[4660]: E0129 12:46:05.471396 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:46:19 crc kubenswrapper[4660]: I0129 12:46:19.470201 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:46:19 crc kubenswrapper[4660]: E0129 12:46:19.471009 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:46:30 crc kubenswrapper[4660]: I0129 12:46:30.469753 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:46:30 crc kubenswrapper[4660]: E0129 12:46:30.470413 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:46:41 crc kubenswrapper[4660]: I0129 12:46:41.470244 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:46:41 crc kubenswrapper[4660]: E0129 12:46:41.471048 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:46:53 crc kubenswrapper[4660]: I0129 12:46:53.472822 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:46:53 crc kubenswrapper[4660]: E0129 12:46:53.473672 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:47:07 crc kubenswrapper[4660]: I0129 12:47:07.470898 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:47:07 crc kubenswrapper[4660]: E0129 12:47:07.471867 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:47:20 crc kubenswrapper[4660]: I0129 12:47:20.470444 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:47:20 crc kubenswrapper[4660]: E0129 12:47:20.471203 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:47:31 crc kubenswrapper[4660]: I0129 12:47:31.470389 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:47:31 crc kubenswrapper[4660]: E0129 12:47:31.472226 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:47:43 crc kubenswrapper[4660]: I0129 12:47:43.474176 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:47:43 crc kubenswrapper[4660]: E0129 12:47:43.474955 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:47:45 crc kubenswrapper[4660]: I0129 12:47:45.662196 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xlj8w"] Jan 29 12:47:45 crc kubenswrapper[4660]: E0129 12:47:45.662885 4660 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f68d5528-8910-4f1e-b413-ed505497a542" containerName="collect-profiles" Jan 29 12:47:45 crc kubenswrapper[4660]: I0129 12:47:45.662905 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="f68d5528-8910-4f1e-b413-ed505497a542" containerName="collect-profiles" Jan 29 12:47:45 crc kubenswrapper[4660]: I0129 12:47:45.663080 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="f68d5528-8910-4f1e-b413-ed505497a542" containerName="collect-profiles" Jan 29 12:47:45 crc kubenswrapper[4660]: I0129 12:47:45.664398 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:45 crc kubenswrapper[4660]: I0129 12:47:45.699331 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xlj8w"] Jan 29 12:47:45 crc kubenswrapper[4660]: I0129 12:47:45.728617 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-utilities\") pod \"certified-operators-xlj8w\" (UID: \"7983090d-5d3d-4d1d-8e39-fffbce3b2edd\") " pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:45 crc kubenswrapper[4660]: I0129 12:47:45.728679 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8fgw\" (UniqueName: \"kubernetes.io/projected/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-kube-api-access-j8fgw\") pod \"certified-operators-xlj8w\" (UID: \"7983090d-5d3d-4d1d-8e39-fffbce3b2edd\") " pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:45 crc kubenswrapper[4660]: I0129 12:47:45.728883 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-catalog-content\") pod \"certified-operators-xlj8w\" (UID: 
\"7983090d-5d3d-4d1d-8e39-fffbce3b2edd\") " pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:45 crc kubenswrapper[4660]: I0129 12:47:45.829638 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-catalog-content\") pod \"certified-operators-xlj8w\" (UID: \"7983090d-5d3d-4d1d-8e39-fffbce3b2edd\") " pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:45 crc kubenswrapper[4660]: I0129 12:47:45.829738 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-utilities\") pod \"certified-operators-xlj8w\" (UID: \"7983090d-5d3d-4d1d-8e39-fffbce3b2edd\") " pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:45 crc kubenswrapper[4660]: I0129 12:47:45.829788 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8fgw\" (UniqueName: \"kubernetes.io/projected/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-kube-api-access-j8fgw\") pod \"certified-operators-xlj8w\" (UID: \"7983090d-5d3d-4d1d-8e39-fffbce3b2edd\") " pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:45 crc kubenswrapper[4660]: I0129 12:47:45.830439 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-catalog-content\") pod \"certified-operators-xlj8w\" (UID: \"7983090d-5d3d-4d1d-8e39-fffbce3b2edd\") " pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:45 crc kubenswrapper[4660]: I0129 12:47:45.830649 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-utilities\") pod \"certified-operators-xlj8w\" (UID: \"7983090d-5d3d-4d1d-8e39-fffbce3b2edd\") 
" pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:45 crc kubenswrapper[4660]: I0129 12:47:45.847943 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8fgw\" (UniqueName: \"kubernetes.io/projected/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-kube-api-access-j8fgw\") pod \"certified-operators-xlj8w\" (UID: \"7983090d-5d3d-4d1d-8e39-fffbce3b2edd\") " pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:45 crc kubenswrapper[4660]: I0129 12:47:45.983488 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:46 crc kubenswrapper[4660]: I0129 12:47:46.533516 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xlj8w"] Jan 29 12:47:47 crc kubenswrapper[4660]: I0129 12:47:47.531158 4660 generic.go:334] "Generic (PLEG): container finished" podID="7983090d-5d3d-4d1d-8e39-fffbce3b2edd" containerID="46d0f5d9eb607e5566cf2e5b2622438dfbda2093fd2c23ad0ca8dbcb1b10ad4e" exitCode=0 Jan 29 12:47:47 crc kubenswrapper[4660]: I0129 12:47:47.531212 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlj8w" event={"ID":"7983090d-5d3d-4d1d-8e39-fffbce3b2edd","Type":"ContainerDied","Data":"46d0f5d9eb607e5566cf2e5b2622438dfbda2093fd2c23ad0ca8dbcb1b10ad4e"} Jan 29 12:47:47 crc kubenswrapper[4660]: I0129 12:47:47.531444 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlj8w" event={"ID":"7983090d-5d3d-4d1d-8e39-fffbce3b2edd","Type":"ContainerStarted","Data":"026563d585431dbb846ca9ea0eef954c7815e32a75a06e3419fda7a5cecb11d2"} Jan 29 12:47:47 crc kubenswrapper[4660]: I0129 12:47:47.533407 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:47:48 crc kubenswrapper[4660]: I0129 12:47:48.538469 4660 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-xlj8w" event={"ID":"7983090d-5d3d-4d1d-8e39-fffbce3b2edd","Type":"ContainerStarted","Data":"48ebf5a410ccca25dc2a963342b4c1830a57267656dc8a9b2e8a4140f00a29c5"} Jan 29 12:47:49 crc kubenswrapper[4660]: I0129 12:47:49.549622 4660 generic.go:334] "Generic (PLEG): container finished" podID="7983090d-5d3d-4d1d-8e39-fffbce3b2edd" containerID="48ebf5a410ccca25dc2a963342b4c1830a57267656dc8a9b2e8a4140f00a29c5" exitCode=0 Jan 29 12:47:49 crc kubenswrapper[4660]: I0129 12:47:49.549666 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlj8w" event={"ID":"7983090d-5d3d-4d1d-8e39-fffbce3b2edd","Type":"ContainerDied","Data":"48ebf5a410ccca25dc2a963342b4c1830a57267656dc8a9b2e8a4140f00a29c5"} Jan 29 12:47:50 crc kubenswrapper[4660]: I0129 12:47:50.557903 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlj8w" event={"ID":"7983090d-5d3d-4d1d-8e39-fffbce3b2edd","Type":"ContainerStarted","Data":"6c60dab8c603ae88007ea12ed29c2e6e982ed523946b2c1fd6ff0f28916306b2"} Jan 29 12:47:50 crc kubenswrapper[4660]: I0129 12:47:50.579492 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xlj8w" podStartSLOduration=2.773250489 podStartE2EDuration="5.579461481s" podCreationTimestamp="2026-01-29 12:47:45 +0000 UTC" firstStartedPulling="2026-01-29 12:47:47.533198585 +0000 UTC m=+2504.756140707" lastFinishedPulling="2026-01-29 12:47:50.339409567 +0000 UTC m=+2507.562351699" observedRunningTime="2026-01-29 12:47:50.5734231 +0000 UTC m=+2507.796365232" watchObservedRunningTime="2026-01-29 12:47:50.579461481 +0000 UTC m=+2507.802403613" Jan 29 12:47:55 crc kubenswrapper[4660]: I0129 12:47:55.984098 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:55 crc kubenswrapper[4660]: I0129 12:47:55.985385 
4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:56 crc kubenswrapper[4660]: I0129 12:47:56.053990 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:56 crc kubenswrapper[4660]: I0129 12:47:56.636482 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:56 crc kubenswrapper[4660]: I0129 12:47:56.685407 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xlj8w"] Jan 29 12:47:57 crc kubenswrapper[4660]: I0129 12:47:57.470455 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:47:57 crc kubenswrapper[4660]: E0129 12:47:57.471897 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:47:58 crc kubenswrapper[4660]: I0129 12:47:58.616113 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xlj8w" podUID="7983090d-5d3d-4d1d-8e39-fffbce3b2edd" containerName="registry-server" containerID="cri-o://6c60dab8c603ae88007ea12ed29c2e6e982ed523946b2c1fd6ff0f28916306b2" gracePeriod=2 Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.019027 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.126764 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-catalog-content\") pod \"7983090d-5d3d-4d1d-8e39-fffbce3b2edd\" (UID: \"7983090d-5d3d-4d1d-8e39-fffbce3b2edd\") " Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.129923 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-utilities\") pod \"7983090d-5d3d-4d1d-8e39-fffbce3b2edd\" (UID: \"7983090d-5d3d-4d1d-8e39-fffbce3b2edd\") " Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.130049 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8fgw\" (UniqueName: \"kubernetes.io/projected/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-kube-api-access-j8fgw\") pod \"7983090d-5d3d-4d1d-8e39-fffbce3b2edd\" (UID: \"7983090d-5d3d-4d1d-8e39-fffbce3b2edd\") " Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.130604 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-utilities" (OuterVolumeSpecName: "utilities") pod "7983090d-5d3d-4d1d-8e39-fffbce3b2edd" (UID: "7983090d-5d3d-4d1d-8e39-fffbce3b2edd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.138062 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-kube-api-access-j8fgw" (OuterVolumeSpecName: "kube-api-access-j8fgw") pod "7983090d-5d3d-4d1d-8e39-fffbce3b2edd" (UID: "7983090d-5d3d-4d1d-8e39-fffbce3b2edd"). InnerVolumeSpecName "kube-api-access-j8fgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.197968 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7983090d-5d3d-4d1d-8e39-fffbce3b2edd" (UID: "7983090d-5d3d-4d1d-8e39-fffbce3b2edd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.232120 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.232173 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.232187 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8fgw\" (UniqueName: \"kubernetes.io/projected/7983090d-5d3d-4d1d-8e39-fffbce3b2edd-kube-api-access-j8fgw\") on node \"crc\" DevicePath \"\"" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.625297 4660 generic.go:334] "Generic (PLEG): container finished" podID="7983090d-5d3d-4d1d-8e39-fffbce3b2edd" containerID="6c60dab8c603ae88007ea12ed29c2e6e982ed523946b2c1fd6ff0f28916306b2" exitCode=0 Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.625342 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlj8w" event={"ID":"7983090d-5d3d-4d1d-8e39-fffbce3b2edd","Type":"ContainerDied","Data":"6c60dab8c603ae88007ea12ed29c2e6e982ed523946b2c1fd6ff0f28916306b2"} Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.625388 4660 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xlj8w" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.625406 4660 scope.go:117] "RemoveContainer" containerID="6c60dab8c603ae88007ea12ed29c2e6e982ed523946b2c1fd6ff0f28916306b2" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.625393 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xlj8w" event={"ID":"7983090d-5d3d-4d1d-8e39-fffbce3b2edd","Type":"ContainerDied","Data":"026563d585431dbb846ca9ea0eef954c7815e32a75a06e3419fda7a5cecb11d2"} Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.651652 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xlj8w"] Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.656627 4660 scope.go:117] "RemoveContainer" containerID="48ebf5a410ccca25dc2a963342b4c1830a57267656dc8a9b2e8a4140f00a29c5" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.660719 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xlj8w"] Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.673127 4660 scope.go:117] "RemoveContainer" containerID="46d0f5d9eb607e5566cf2e5b2622438dfbda2093fd2c23ad0ca8dbcb1b10ad4e" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.698796 4660 scope.go:117] "RemoveContainer" containerID="6c60dab8c603ae88007ea12ed29c2e6e982ed523946b2c1fd6ff0f28916306b2" Jan 29 12:47:59 crc kubenswrapper[4660]: E0129 12:47:59.699278 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c60dab8c603ae88007ea12ed29c2e6e982ed523946b2c1fd6ff0f28916306b2\": container with ID starting with 6c60dab8c603ae88007ea12ed29c2e6e982ed523946b2c1fd6ff0f28916306b2 not found: ID does not exist" containerID="6c60dab8c603ae88007ea12ed29c2e6e982ed523946b2c1fd6ff0f28916306b2" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.699310 
4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c60dab8c603ae88007ea12ed29c2e6e982ed523946b2c1fd6ff0f28916306b2"} err="failed to get container status \"6c60dab8c603ae88007ea12ed29c2e6e982ed523946b2c1fd6ff0f28916306b2\": rpc error: code = NotFound desc = could not find container \"6c60dab8c603ae88007ea12ed29c2e6e982ed523946b2c1fd6ff0f28916306b2\": container with ID starting with 6c60dab8c603ae88007ea12ed29c2e6e982ed523946b2c1fd6ff0f28916306b2 not found: ID does not exist" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.699331 4660 scope.go:117] "RemoveContainer" containerID="48ebf5a410ccca25dc2a963342b4c1830a57267656dc8a9b2e8a4140f00a29c5" Jan 29 12:47:59 crc kubenswrapper[4660]: E0129 12:47:59.699762 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48ebf5a410ccca25dc2a963342b4c1830a57267656dc8a9b2e8a4140f00a29c5\": container with ID starting with 48ebf5a410ccca25dc2a963342b4c1830a57267656dc8a9b2e8a4140f00a29c5 not found: ID does not exist" containerID="48ebf5a410ccca25dc2a963342b4c1830a57267656dc8a9b2e8a4140f00a29c5" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.699796 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48ebf5a410ccca25dc2a963342b4c1830a57267656dc8a9b2e8a4140f00a29c5"} err="failed to get container status \"48ebf5a410ccca25dc2a963342b4c1830a57267656dc8a9b2e8a4140f00a29c5\": rpc error: code = NotFound desc = could not find container \"48ebf5a410ccca25dc2a963342b4c1830a57267656dc8a9b2e8a4140f00a29c5\": container with ID starting with 48ebf5a410ccca25dc2a963342b4c1830a57267656dc8a9b2e8a4140f00a29c5 not found: ID does not exist" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.699817 4660 scope.go:117] "RemoveContainer" containerID="46d0f5d9eb607e5566cf2e5b2622438dfbda2093fd2c23ad0ca8dbcb1b10ad4e" Jan 29 12:47:59 crc kubenswrapper[4660]: E0129 
12:47:59.700101 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46d0f5d9eb607e5566cf2e5b2622438dfbda2093fd2c23ad0ca8dbcb1b10ad4e\": container with ID starting with 46d0f5d9eb607e5566cf2e5b2622438dfbda2093fd2c23ad0ca8dbcb1b10ad4e not found: ID does not exist" containerID="46d0f5d9eb607e5566cf2e5b2622438dfbda2093fd2c23ad0ca8dbcb1b10ad4e" Jan 29 12:47:59 crc kubenswrapper[4660]: I0129 12:47:59.700126 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46d0f5d9eb607e5566cf2e5b2622438dfbda2093fd2c23ad0ca8dbcb1b10ad4e"} err="failed to get container status \"46d0f5d9eb607e5566cf2e5b2622438dfbda2093fd2c23ad0ca8dbcb1b10ad4e\": rpc error: code = NotFound desc = could not find container \"46d0f5d9eb607e5566cf2e5b2622438dfbda2093fd2c23ad0ca8dbcb1b10ad4e\": container with ID starting with 46d0f5d9eb607e5566cf2e5b2622438dfbda2093fd2c23ad0ca8dbcb1b10ad4e not found: ID does not exist" Jan 29 12:48:01 crc kubenswrapper[4660]: I0129 12:48:01.478912 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7983090d-5d3d-4d1d-8e39-fffbce3b2edd" path="/var/lib/kubelet/pods/7983090d-5d3d-4d1d-8e39-fffbce3b2edd/volumes" Jan 29 12:48:12 crc kubenswrapper[4660]: I0129 12:48:12.469757 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:48:12 crc kubenswrapper[4660]: E0129 12:48:12.470547 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:48:24 crc kubenswrapper[4660]: I0129 12:48:24.469838 
4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:48:24 crc kubenswrapper[4660]: E0129 12:48:24.470313 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:48:38 crc kubenswrapper[4660]: I0129 12:48:38.470245 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:48:38 crc kubenswrapper[4660]: I0129 12:48:38.935315 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerStarted","Data":"78c14d96f551daa89edbdac1885e3352998009dccbe8593bbae73ef82702c98d"} Jan 29 12:50:56 crc kubenswrapper[4660]: I0129 12:50:56.269731 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:50:56 crc kubenswrapper[4660]: I0129 12:50:56.270343 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:51:26 crc kubenswrapper[4660]: I0129 12:51:26.269395 4660 patch_prober.go:28] interesting 
pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:51:26 crc kubenswrapper[4660]: I0129 12:51:26.270031 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:51:56 crc kubenswrapper[4660]: I0129 12:51:56.269658 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:51:56 crc kubenswrapper[4660]: I0129 12:51:56.270324 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:51:56 crc kubenswrapper[4660]: I0129 12:51:56.270376 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:51:56 crc kubenswrapper[4660]: I0129 12:51:56.271107 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78c14d96f551daa89edbdac1885e3352998009dccbe8593bbae73ef82702c98d"} pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Jan 29 12:51:56 crc kubenswrapper[4660]: I0129 12:51:56.271166 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" containerID="cri-o://78c14d96f551daa89edbdac1885e3352998009dccbe8593bbae73ef82702c98d" gracePeriod=600 Jan 29 12:51:56 crc kubenswrapper[4660]: I0129 12:51:56.521575 4660 generic.go:334] "Generic (PLEG): container finished" podID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerID="78c14d96f551daa89edbdac1885e3352998009dccbe8593bbae73ef82702c98d" exitCode=0 Jan 29 12:51:56 crc kubenswrapper[4660]: I0129 12:51:56.521624 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerDied","Data":"78c14d96f551daa89edbdac1885e3352998009dccbe8593bbae73ef82702c98d"} Jan 29 12:51:56 crc kubenswrapper[4660]: I0129 12:51:56.521661 4660 scope.go:117] "RemoveContainer" containerID="88937743b5b5c353a8447927453439de8f37b40f916c72a114a04cac174be9ae" Jan 29 12:51:57 crc kubenswrapper[4660]: I0129 12:51:57.530932 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerStarted","Data":"e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351"} Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.070164 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zr55v/must-gather-hg7fs"] Jan 29 12:52:08 crc kubenswrapper[4660]: E0129 12:52:08.070794 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7983090d-5d3d-4d1d-8e39-fffbce3b2edd" containerName="extract-content" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.070805 4660 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="7983090d-5d3d-4d1d-8e39-fffbce3b2edd" containerName="extract-content" Jan 29 12:52:08 crc kubenswrapper[4660]: E0129 12:52:08.070812 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7983090d-5d3d-4d1d-8e39-fffbce3b2edd" containerName="registry-server" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.070818 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="7983090d-5d3d-4d1d-8e39-fffbce3b2edd" containerName="registry-server" Jan 29 12:52:08 crc kubenswrapper[4660]: E0129 12:52:08.070835 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7983090d-5d3d-4d1d-8e39-fffbce3b2edd" containerName="extract-utilities" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.070841 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="7983090d-5d3d-4d1d-8e39-fffbce3b2edd" containerName="extract-utilities" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.070979 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="7983090d-5d3d-4d1d-8e39-fffbce3b2edd" containerName="registry-server" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.071601 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zr55v/must-gather-hg7fs" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.073975 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zr55v"/"openshift-service-ca.crt" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.074494 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zr55v"/"kube-root-ca.crt" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.075386 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-zr55v"/"default-dockercfg-9h7vb" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.085195 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/69ad4983-667d-4021-89eb-e3145bd9b2df-must-gather-output\") pod \"must-gather-hg7fs\" (UID: \"69ad4983-667d-4021-89eb-e3145bd9b2df\") " pod="openshift-must-gather-zr55v/must-gather-hg7fs" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.085258 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4wmd\" (UniqueName: \"kubernetes.io/projected/69ad4983-667d-4021-89eb-e3145bd9b2df-kube-api-access-m4wmd\") pod \"must-gather-hg7fs\" (UID: \"69ad4983-667d-4021-89eb-e3145bd9b2df\") " pod="openshift-must-gather-zr55v/must-gather-hg7fs" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.087047 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zr55v/must-gather-hg7fs"] Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.186391 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4wmd\" (UniqueName: \"kubernetes.io/projected/69ad4983-667d-4021-89eb-e3145bd9b2df-kube-api-access-m4wmd\") pod \"must-gather-hg7fs\" (UID: \"69ad4983-667d-4021-89eb-e3145bd9b2df\") " 
pod="openshift-must-gather-zr55v/must-gather-hg7fs" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.186507 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/69ad4983-667d-4021-89eb-e3145bd9b2df-must-gather-output\") pod \"must-gather-hg7fs\" (UID: \"69ad4983-667d-4021-89eb-e3145bd9b2df\") " pod="openshift-must-gather-zr55v/must-gather-hg7fs" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.187094 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/69ad4983-667d-4021-89eb-e3145bd9b2df-must-gather-output\") pod \"must-gather-hg7fs\" (UID: \"69ad4983-667d-4021-89eb-e3145bd9b2df\") " pod="openshift-must-gather-zr55v/must-gather-hg7fs" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.211933 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4wmd\" (UniqueName: \"kubernetes.io/projected/69ad4983-667d-4021-89eb-e3145bd9b2df-kube-api-access-m4wmd\") pod \"must-gather-hg7fs\" (UID: \"69ad4983-667d-4021-89eb-e3145bd9b2df\") " pod="openshift-must-gather-zr55v/must-gather-hg7fs" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.432485 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zr55v/must-gather-hg7fs" Jan 29 12:52:08 crc kubenswrapper[4660]: I0129 12:52:08.864625 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zr55v/must-gather-hg7fs"] Jan 29 12:52:09 crc kubenswrapper[4660]: I0129 12:52:09.623480 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zr55v/must-gather-hg7fs" event={"ID":"69ad4983-667d-4021-89eb-e3145bd9b2df","Type":"ContainerStarted","Data":"72c4dce15e0ec636c1ce39a4d2e5c56d554aa1d07eb15ea7bbde329c7f86ed93"} Jan 29 12:52:17 crc kubenswrapper[4660]: I0129 12:52:17.700448 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zr55v/must-gather-hg7fs" event={"ID":"69ad4983-667d-4021-89eb-e3145bd9b2df","Type":"ContainerStarted","Data":"38ddc15c162462885246d1bd50242726e760226a917b86e14ef2a5d162b1779f"} Jan 29 12:52:17 crc kubenswrapper[4660]: I0129 12:52:17.701026 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zr55v/must-gather-hg7fs" event={"ID":"69ad4983-667d-4021-89eb-e3145bd9b2df","Type":"ContainerStarted","Data":"a0303e6f9c5b9a031dc702d10bfcd441b327a8aff45827bcbb065ccff4f6994e"} Jan 29 12:52:19 crc kubenswrapper[4660]: I0129 12:52:19.521181 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zr55v/must-gather-hg7fs" podStartSLOduration=3.837569763 podStartE2EDuration="11.521162042s" podCreationTimestamp="2026-01-29 12:52:08 +0000 UTC" firstStartedPulling="2026-01-29 12:52:08.85194816 +0000 UTC m=+2766.074890292" lastFinishedPulling="2026-01-29 12:52:16.535540439 +0000 UTC m=+2773.758482571" observedRunningTime="2026-01-29 12:52:17.719910441 +0000 UTC m=+2774.942852573" watchObservedRunningTime="2026-01-29 12:52:19.521162042 +0000 UTC m=+2776.744104164" Jan 29 12:52:19 crc kubenswrapper[4660]: I0129 12:52:19.527165 4660 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-m22sh"] Jan 29 12:52:19 crc kubenswrapper[4660]: I0129 12:52:19.528930 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:19 crc kubenswrapper[4660]: I0129 12:52:19.547276 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m22sh"] Jan 29 12:52:19 crc kubenswrapper[4660]: I0129 12:52:19.564243 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-utilities\") pod \"community-operators-m22sh\" (UID: \"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48\") " pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:19 crc kubenswrapper[4660]: I0129 12:52:19.564349 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-catalog-content\") pod \"community-operators-m22sh\" (UID: \"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48\") " pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:19 crc kubenswrapper[4660]: I0129 12:52:19.564413 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qz62\" (UniqueName: \"kubernetes.io/projected/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-kube-api-access-7qz62\") pod \"community-operators-m22sh\" (UID: \"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48\") " pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:19 crc kubenswrapper[4660]: I0129 12:52:19.665924 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-utilities\") pod \"community-operators-m22sh\" (UID: \"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48\") 
" pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:19 crc kubenswrapper[4660]: I0129 12:52:19.665991 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-catalog-content\") pod \"community-operators-m22sh\" (UID: \"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48\") " pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:19 crc kubenswrapper[4660]: I0129 12:52:19.666115 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qz62\" (UniqueName: \"kubernetes.io/projected/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-kube-api-access-7qz62\") pod \"community-operators-m22sh\" (UID: \"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48\") " pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:19 crc kubenswrapper[4660]: I0129 12:52:19.666787 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-utilities\") pod \"community-operators-m22sh\" (UID: \"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48\") " pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:19 crc kubenswrapper[4660]: I0129 12:52:19.666800 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-catalog-content\") pod \"community-operators-m22sh\" (UID: \"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48\") " pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:19 crc kubenswrapper[4660]: I0129 12:52:19.691582 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qz62\" (UniqueName: \"kubernetes.io/projected/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-kube-api-access-7qz62\") pod \"community-operators-m22sh\" (UID: \"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48\") " 
pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:19 crc kubenswrapper[4660]: I0129 12:52:19.845817 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:20 crc kubenswrapper[4660]: I0129 12:52:20.122384 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lrkcv"] Jan 29 12:52:20 crc kubenswrapper[4660]: I0129 12:52:20.130415 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:20 crc kubenswrapper[4660]: I0129 12:52:20.135488 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lrkcv"] Jan 29 12:52:20 crc kubenswrapper[4660]: I0129 12:52:20.177936 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e528e323-93f5-4e88-89a8-5080de0cd260-catalog-content\") pod \"redhat-operators-lrkcv\" (UID: \"e528e323-93f5-4e88-89a8-5080de0cd260\") " pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:20 crc kubenswrapper[4660]: I0129 12:52:20.178024 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxcxr\" (UniqueName: \"kubernetes.io/projected/e528e323-93f5-4e88-89a8-5080de0cd260-kube-api-access-qxcxr\") pod \"redhat-operators-lrkcv\" (UID: \"e528e323-93f5-4e88-89a8-5080de0cd260\") " pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:20 crc kubenswrapper[4660]: I0129 12:52:20.178091 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e528e323-93f5-4e88-89a8-5080de0cd260-utilities\") pod \"redhat-operators-lrkcv\" (UID: \"e528e323-93f5-4e88-89a8-5080de0cd260\") " 
pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:20 crc kubenswrapper[4660]: I0129 12:52:20.279515 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e528e323-93f5-4e88-89a8-5080de0cd260-catalog-content\") pod \"redhat-operators-lrkcv\" (UID: \"e528e323-93f5-4e88-89a8-5080de0cd260\") " pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:20 crc kubenswrapper[4660]: I0129 12:52:20.279578 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxcxr\" (UniqueName: \"kubernetes.io/projected/e528e323-93f5-4e88-89a8-5080de0cd260-kube-api-access-qxcxr\") pod \"redhat-operators-lrkcv\" (UID: \"e528e323-93f5-4e88-89a8-5080de0cd260\") " pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:20 crc kubenswrapper[4660]: I0129 12:52:20.279670 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e528e323-93f5-4e88-89a8-5080de0cd260-utilities\") pod \"redhat-operators-lrkcv\" (UID: \"e528e323-93f5-4e88-89a8-5080de0cd260\") " pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:20 crc kubenswrapper[4660]: I0129 12:52:20.280123 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e528e323-93f5-4e88-89a8-5080de0cd260-catalog-content\") pod \"redhat-operators-lrkcv\" (UID: \"e528e323-93f5-4e88-89a8-5080de0cd260\") " pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:20 crc kubenswrapper[4660]: I0129 12:52:20.280136 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e528e323-93f5-4e88-89a8-5080de0cd260-utilities\") pod \"redhat-operators-lrkcv\" (UID: \"e528e323-93f5-4e88-89a8-5080de0cd260\") " pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:20 crc 
kubenswrapper[4660]: I0129 12:52:20.312830 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxcxr\" (UniqueName: \"kubernetes.io/projected/e528e323-93f5-4e88-89a8-5080de0cd260-kube-api-access-qxcxr\") pod \"redhat-operators-lrkcv\" (UID: \"e528e323-93f5-4e88-89a8-5080de0cd260\") " pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:20 crc kubenswrapper[4660]: I0129 12:52:20.385928 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m22sh"] Jan 29 12:52:20 crc kubenswrapper[4660]: I0129 12:52:20.453894 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:20 crc kubenswrapper[4660]: I0129 12:52:20.720634 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m22sh" event={"ID":"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48","Type":"ContainerStarted","Data":"88cf552d5948413933571328aaa62e734021735107e33fc78caffee9aa7791f0"} Jan 29 12:52:20 crc kubenswrapper[4660]: W0129 12:52:20.917532 4660 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode528e323_93f5_4e88_89a8_5080de0cd260.slice/crio-568a9fc6060c2aee427598ac3adf753218c4e7c1e2f772912abcff9812051aac WatchSource:0}: Error finding container 568a9fc6060c2aee427598ac3adf753218c4e7c1e2f772912abcff9812051aac: Status 404 returned error can't find the container with id 568a9fc6060c2aee427598ac3adf753218c4e7c1e2f772912abcff9812051aac Jan 29 12:52:20 crc kubenswrapper[4660]: I0129 12:52:20.918497 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lrkcv"] Jan 29 12:52:21 crc kubenswrapper[4660]: I0129 12:52:21.731702 4660 generic.go:334] "Generic (PLEG): container finished" podID="e528e323-93f5-4e88-89a8-5080de0cd260" 
containerID="21ca67f3f49f760e993e6c29c5613a3dac3d7528475810c533e3873d9ca672f1" exitCode=0 Jan 29 12:52:21 crc kubenswrapper[4660]: I0129 12:52:21.731807 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrkcv" event={"ID":"e528e323-93f5-4e88-89a8-5080de0cd260","Type":"ContainerDied","Data":"21ca67f3f49f760e993e6c29c5613a3dac3d7528475810c533e3873d9ca672f1"} Jan 29 12:52:21 crc kubenswrapper[4660]: I0129 12:52:21.732032 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrkcv" event={"ID":"e528e323-93f5-4e88-89a8-5080de0cd260","Type":"ContainerStarted","Data":"568a9fc6060c2aee427598ac3adf753218c4e7c1e2f772912abcff9812051aac"} Jan 29 12:52:21 crc kubenswrapper[4660]: I0129 12:52:21.735148 4660 generic.go:334] "Generic (PLEG): container finished" podID="cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48" containerID="187bc644a74ad12be201b600aa6d5da810c3983a3471c903368c805eba897256" exitCode=0 Jan 29 12:52:21 crc kubenswrapper[4660]: I0129 12:52:21.735205 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m22sh" event={"ID":"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48","Type":"ContainerDied","Data":"187bc644a74ad12be201b600aa6d5da810c3983a3471c903368c805eba897256"} Jan 29 12:52:23 crc kubenswrapper[4660]: I0129 12:52:23.749039 4660 generic.go:334] "Generic (PLEG): container finished" podID="cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48" containerID="2e3fff315a038a402368830a868f9d127ad120bdbd29227ff7ff228a646cf366" exitCode=0 Jan 29 12:52:23 crc kubenswrapper[4660]: I0129 12:52:23.749347 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m22sh" event={"ID":"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48","Type":"ContainerDied","Data":"2e3fff315a038a402368830a868f9d127ad120bdbd29227ff7ff228a646cf366"} Jan 29 12:52:25 crc kubenswrapper[4660]: I0129 12:52:25.762053 4660 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-m22sh" event={"ID":"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48","Type":"ContainerStarted","Data":"767cbb8e6ef630586fa9e46b17798ade27d4db5c1d90b0db2b9143d5cf72455c"} Jan 29 12:52:25 crc kubenswrapper[4660]: I0129 12:52:25.797315 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m22sh" podStartSLOduration=3.277837937 podStartE2EDuration="6.79729488s" podCreationTimestamp="2026-01-29 12:52:19 +0000 UTC" firstStartedPulling="2026-01-29 12:52:21.737083399 +0000 UTC m=+2778.960025531" lastFinishedPulling="2026-01-29 12:52:25.256540342 +0000 UTC m=+2782.479482474" observedRunningTime="2026-01-29 12:52:25.792579255 +0000 UTC m=+2783.015521397" watchObservedRunningTime="2026-01-29 12:52:25.79729488 +0000 UTC m=+2783.020237022" Jan 29 12:52:27 crc kubenswrapper[4660]: I0129 12:52:27.777457 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrkcv" event={"ID":"e528e323-93f5-4e88-89a8-5080de0cd260","Type":"ContainerStarted","Data":"b2f221e41e92c1a4104f1720bc744506a13aa5ea7f86f934b73c4c41b4b1e2ee"} Jan 29 12:52:28 crc kubenswrapper[4660]: I0129 12:52:28.784918 4660 generic.go:334] "Generic (PLEG): container finished" podID="e528e323-93f5-4e88-89a8-5080de0cd260" containerID="b2f221e41e92c1a4104f1720bc744506a13aa5ea7f86f934b73c4c41b4b1e2ee" exitCode=0 Jan 29 12:52:28 crc kubenswrapper[4660]: I0129 12:52:28.784959 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrkcv" event={"ID":"e528e323-93f5-4e88-89a8-5080de0cd260","Type":"ContainerDied","Data":"b2f221e41e92c1a4104f1720bc744506a13aa5ea7f86f934b73c4c41b4b1e2ee"} Jan 29 12:52:29 crc kubenswrapper[4660]: I0129 12:52:29.846493 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:29 crc kubenswrapper[4660]: I0129 12:52:29.847629 4660 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:29 crc kubenswrapper[4660]: I0129 12:52:29.909241 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:30 crc kubenswrapper[4660]: I0129 12:52:30.845254 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:33 crc kubenswrapper[4660]: I0129 12:52:33.713803 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m22sh"] Jan 29 12:52:33 crc kubenswrapper[4660]: I0129 12:52:33.815793 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m22sh" podUID="cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48" containerName="registry-server" containerID="cri-o://767cbb8e6ef630586fa9e46b17798ade27d4db5c1d90b0db2b9143d5cf72455c" gracePeriod=2 Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.800534 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.854684 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrkcv" event={"ID":"e528e323-93f5-4e88-89a8-5080de0cd260","Type":"ContainerStarted","Data":"f6e1e56983eb62162288e5cbe018e99b0f8fb25514e45f7fefb656c6a1d303e6"} Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.857158 4660 generic.go:334] "Generic (PLEG): container finished" podID="cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48" containerID="767cbb8e6ef630586fa9e46b17798ade27d4db5c1d90b0db2b9143d5cf72455c" exitCode=0 Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.857199 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m22sh" event={"ID":"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48","Type":"ContainerDied","Data":"767cbb8e6ef630586fa9e46b17798ade27d4db5c1d90b0db2b9143d5cf72455c"} Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.857227 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m22sh" event={"ID":"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48","Type":"ContainerDied","Data":"88cf552d5948413933571328aaa62e734021735107e33fc78caffee9aa7791f0"} Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.857247 4660 scope.go:117] "RemoveContainer" containerID="767cbb8e6ef630586fa9e46b17798ade27d4db5c1d90b0db2b9143d5cf72455c" Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.857245 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m22sh" Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.879610 4660 scope.go:117] "RemoveContainer" containerID="2e3fff315a038a402368830a868f9d127ad120bdbd29227ff7ff228a646cf366" Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.888615 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lrkcv" podStartSLOduration=2.4026483020000002 podStartE2EDuration="17.888589634s" podCreationTimestamp="2026-01-29 12:52:20 +0000 UTC" firstStartedPulling="2026-01-29 12:52:21.734105064 +0000 UTC m=+2778.957047196" lastFinishedPulling="2026-01-29 12:52:37.220046396 +0000 UTC m=+2794.442988528" observedRunningTime="2026-01-29 12:52:37.869743474 +0000 UTC m=+2795.092685606" watchObservedRunningTime="2026-01-29 12:52:37.888589634 +0000 UTC m=+2795.111531776" Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.912826 4660 scope.go:117] "RemoveContainer" containerID="187bc644a74ad12be201b600aa6d5da810c3983a3471c903368c805eba897256" Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.930591 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-utilities\") pod \"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48\" (UID: \"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48\") " Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.930679 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-catalog-content\") pod \"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48\" (UID: \"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48\") " Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.930751 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qz62\" (UniqueName: 
\"kubernetes.io/projected/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-kube-api-access-7qz62\") pod \"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48\" (UID: \"cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48\") " Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.931671 4660 scope.go:117] "RemoveContainer" containerID="767cbb8e6ef630586fa9e46b17798ade27d4db5c1d90b0db2b9143d5cf72455c" Jan 29 12:52:37 crc kubenswrapper[4660]: E0129 12:52:37.932038 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"767cbb8e6ef630586fa9e46b17798ade27d4db5c1d90b0db2b9143d5cf72455c\": container with ID starting with 767cbb8e6ef630586fa9e46b17798ade27d4db5c1d90b0db2b9143d5cf72455c not found: ID does not exist" containerID="767cbb8e6ef630586fa9e46b17798ade27d4db5c1d90b0db2b9143d5cf72455c" Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.932064 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"767cbb8e6ef630586fa9e46b17798ade27d4db5c1d90b0db2b9143d5cf72455c"} err="failed to get container status \"767cbb8e6ef630586fa9e46b17798ade27d4db5c1d90b0db2b9143d5cf72455c\": rpc error: code = NotFound desc = could not find container \"767cbb8e6ef630586fa9e46b17798ade27d4db5c1d90b0db2b9143d5cf72455c\": container with ID starting with 767cbb8e6ef630586fa9e46b17798ade27d4db5c1d90b0db2b9143d5cf72455c not found: ID does not exist" Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.932090 4660 scope.go:117] "RemoveContainer" containerID="2e3fff315a038a402368830a868f9d127ad120bdbd29227ff7ff228a646cf366" Jan 29 12:52:37 crc kubenswrapper[4660]: E0129 12:52:37.932287 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e3fff315a038a402368830a868f9d127ad120bdbd29227ff7ff228a646cf366\": container with ID starting with 2e3fff315a038a402368830a868f9d127ad120bdbd29227ff7ff228a646cf366 not found: ID does not exist" 
containerID="2e3fff315a038a402368830a868f9d127ad120bdbd29227ff7ff228a646cf366" Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.932306 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e3fff315a038a402368830a868f9d127ad120bdbd29227ff7ff228a646cf366"} err="failed to get container status \"2e3fff315a038a402368830a868f9d127ad120bdbd29227ff7ff228a646cf366\": rpc error: code = NotFound desc = could not find container \"2e3fff315a038a402368830a868f9d127ad120bdbd29227ff7ff228a646cf366\": container with ID starting with 2e3fff315a038a402368830a868f9d127ad120bdbd29227ff7ff228a646cf366 not found: ID does not exist" Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.932321 4660 scope.go:117] "RemoveContainer" containerID="187bc644a74ad12be201b600aa6d5da810c3983a3471c903368c805eba897256" Jan 29 12:52:37 crc kubenswrapper[4660]: E0129 12:52:37.932515 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"187bc644a74ad12be201b600aa6d5da810c3983a3471c903368c805eba897256\": container with ID starting with 187bc644a74ad12be201b600aa6d5da810c3983a3471c903368c805eba897256 not found: ID does not exist" containerID="187bc644a74ad12be201b600aa6d5da810c3983a3471c903368c805eba897256" Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.932535 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"187bc644a74ad12be201b600aa6d5da810c3983a3471c903368c805eba897256"} err="failed to get container status \"187bc644a74ad12be201b600aa6d5da810c3983a3471c903368c805eba897256\": rpc error: code = NotFound desc = could not find container \"187bc644a74ad12be201b600aa6d5da810c3983a3471c903368c805eba897256\": container with ID starting with 187bc644a74ad12be201b600aa6d5da810c3983a3471c903368c805eba897256 not found: ID does not exist" Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.933170 4660 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-utilities" (OuterVolumeSpecName: "utilities") pod "cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48" (UID: "cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.947204 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-kube-api-access-7qz62" (OuterVolumeSpecName: "kube-api-access-7qz62") pod "cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48" (UID: "cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48"). InnerVolumeSpecName "kube-api-access-7qz62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:52:37 crc kubenswrapper[4660]: I0129 12:52:37.992119 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48" (UID: "cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:52:38 crc kubenswrapper[4660]: I0129 12:52:38.031540 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:52:38 crc kubenswrapper[4660]: I0129 12:52:38.032056 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:52:38 crc kubenswrapper[4660]: I0129 12:52:38.032129 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qz62\" (UniqueName: \"kubernetes.io/projected/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48-kube-api-access-7qz62\") on node \"crc\" DevicePath \"\"" Jan 29 12:52:38 crc kubenswrapper[4660]: I0129 12:52:38.188012 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m22sh"] Jan 29 12:52:38 crc kubenswrapper[4660]: I0129 12:52:38.193167 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m22sh"] Jan 29 12:52:39 crc kubenswrapper[4660]: I0129 12:52:39.478612 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48" path="/var/lib/kubelet/pods/cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48/volumes" Jan 29 12:52:40 crc kubenswrapper[4660]: I0129 12:52:40.455012 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:40 crc kubenswrapper[4660]: I0129 12:52:40.455461 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:41 crc kubenswrapper[4660]: I0129 12:52:41.504869 4660 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-lrkcv" podUID="e528e323-93f5-4e88-89a8-5080de0cd260" containerName="registry-server" probeResult="failure" output=< Jan 29 12:52:41 crc kubenswrapper[4660]: timeout: failed to connect service ":50051" within 1s Jan 29 12:52:41 crc kubenswrapper[4660]: > Jan 29 12:52:50 crc kubenswrapper[4660]: I0129 12:52:50.498611 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:50 crc kubenswrapper[4660]: I0129 12:52:50.544101 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:50 crc kubenswrapper[4660]: I0129 12:52:50.730770 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lrkcv"] Jan 29 12:52:51 crc kubenswrapper[4660]: I0129 12:52:51.951488 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lrkcv" podUID="e528e323-93f5-4e88-89a8-5080de0cd260" containerName="registry-server" containerID="cri-o://f6e1e56983eb62162288e5cbe018e99b0f8fb25514e45f7fefb656c6a1d303e6" gracePeriod=2 Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.322789 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.421219 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxcxr\" (UniqueName: \"kubernetes.io/projected/e528e323-93f5-4e88-89a8-5080de0cd260-kube-api-access-qxcxr\") pod \"e528e323-93f5-4e88-89a8-5080de0cd260\" (UID: \"e528e323-93f5-4e88-89a8-5080de0cd260\") " Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.421339 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e528e323-93f5-4e88-89a8-5080de0cd260-catalog-content\") pod \"e528e323-93f5-4e88-89a8-5080de0cd260\" (UID: \"e528e323-93f5-4e88-89a8-5080de0cd260\") " Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.421380 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e528e323-93f5-4e88-89a8-5080de0cd260-utilities\") pod \"e528e323-93f5-4e88-89a8-5080de0cd260\" (UID: \"e528e323-93f5-4e88-89a8-5080de0cd260\") " Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.422419 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e528e323-93f5-4e88-89a8-5080de0cd260-utilities" (OuterVolumeSpecName: "utilities") pod "e528e323-93f5-4e88-89a8-5080de0cd260" (UID: "e528e323-93f5-4e88-89a8-5080de0cd260"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.426896 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e528e323-93f5-4e88-89a8-5080de0cd260-kube-api-access-qxcxr" (OuterVolumeSpecName: "kube-api-access-qxcxr") pod "e528e323-93f5-4e88-89a8-5080de0cd260" (UID: "e528e323-93f5-4e88-89a8-5080de0cd260"). InnerVolumeSpecName "kube-api-access-qxcxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.522470 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e528e323-93f5-4e88-89a8-5080de0cd260-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.522510 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxcxr\" (UniqueName: \"kubernetes.io/projected/e528e323-93f5-4e88-89a8-5080de0cd260-kube-api-access-qxcxr\") on node \"crc\" DevicePath \"\"" Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.551557 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e528e323-93f5-4e88-89a8-5080de0cd260-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e528e323-93f5-4e88-89a8-5080de0cd260" (UID: "e528e323-93f5-4e88-89a8-5080de0cd260"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.623760 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e528e323-93f5-4e88-89a8-5080de0cd260-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.959404 4660 generic.go:334] "Generic (PLEG): container finished" podID="e528e323-93f5-4e88-89a8-5080de0cd260" containerID="f6e1e56983eb62162288e5cbe018e99b0f8fb25514e45f7fefb656c6a1d303e6" exitCode=0 Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.959457 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lrkcv" event={"ID":"e528e323-93f5-4e88-89a8-5080de0cd260","Type":"ContainerDied","Data":"f6e1e56983eb62162288e5cbe018e99b0f8fb25514e45f7fefb656c6a1d303e6"} Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.959523 4660 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-lrkcv" event={"ID":"e528e323-93f5-4e88-89a8-5080de0cd260","Type":"ContainerDied","Data":"568a9fc6060c2aee427598ac3adf753218c4e7c1e2f772912abcff9812051aac"} Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.959548 4660 scope.go:117] "RemoveContainer" containerID="f6e1e56983eb62162288e5cbe018e99b0f8fb25514e45f7fefb656c6a1d303e6" Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.959468 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lrkcv" Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.979573 4660 scope.go:117] "RemoveContainer" containerID="b2f221e41e92c1a4104f1720bc744506a13aa5ea7f86f934b73c4c41b4b1e2ee" Jan 29 12:52:52 crc kubenswrapper[4660]: I0129 12:52:52.996385 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lrkcv"] Jan 29 12:52:53 crc kubenswrapper[4660]: I0129 12:52:53.002193 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lrkcv"] Jan 29 12:52:53 crc kubenswrapper[4660]: I0129 12:52:53.011399 4660 scope.go:117] "RemoveContainer" containerID="21ca67f3f49f760e993e6c29c5613a3dac3d7528475810c533e3873d9ca672f1" Jan 29 12:52:53 crc kubenswrapper[4660]: I0129 12:52:53.028638 4660 scope.go:117] "RemoveContainer" containerID="f6e1e56983eb62162288e5cbe018e99b0f8fb25514e45f7fefb656c6a1d303e6" Jan 29 12:52:53 crc kubenswrapper[4660]: E0129 12:52:53.029160 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e1e56983eb62162288e5cbe018e99b0f8fb25514e45f7fefb656c6a1d303e6\": container with ID starting with f6e1e56983eb62162288e5cbe018e99b0f8fb25514e45f7fefb656c6a1d303e6 not found: ID does not exist" containerID="f6e1e56983eb62162288e5cbe018e99b0f8fb25514e45f7fefb656c6a1d303e6" Jan 29 12:52:53 crc kubenswrapper[4660]: I0129 12:52:53.029200 4660 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e1e56983eb62162288e5cbe018e99b0f8fb25514e45f7fefb656c6a1d303e6"} err="failed to get container status \"f6e1e56983eb62162288e5cbe018e99b0f8fb25514e45f7fefb656c6a1d303e6\": rpc error: code = NotFound desc = could not find container \"f6e1e56983eb62162288e5cbe018e99b0f8fb25514e45f7fefb656c6a1d303e6\": container with ID starting with f6e1e56983eb62162288e5cbe018e99b0f8fb25514e45f7fefb656c6a1d303e6 not found: ID does not exist" Jan 29 12:52:53 crc kubenswrapper[4660]: I0129 12:52:53.029221 4660 scope.go:117] "RemoveContainer" containerID="b2f221e41e92c1a4104f1720bc744506a13aa5ea7f86f934b73c4c41b4b1e2ee" Jan 29 12:52:53 crc kubenswrapper[4660]: E0129 12:52:53.029460 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2f221e41e92c1a4104f1720bc744506a13aa5ea7f86f934b73c4c41b4b1e2ee\": container with ID starting with b2f221e41e92c1a4104f1720bc744506a13aa5ea7f86f934b73c4c41b4b1e2ee not found: ID does not exist" containerID="b2f221e41e92c1a4104f1720bc744506a13aa5ea7f86f934b73c4c41b4b1e2ee" Jan 29 12:52:53 crc kubenswrapper[4660]: I0129 12:52:53.029486 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2f221e41e92c1a4104f1720bc744506a13aa5ea7f86f934b73c4c41b4b1e2ee"} err="failed to get container status \"b2f221e41e92c1a4104f1720bc744506a13aa5ea7f86f934b73c4c41b4b1e2ee\": rpc error: code = NotFound desc = could not find container \"b2f221e41e92c1a4104f1720bc744506a13aa5ea7f86f934b73c4c41b4b1e2ee\": container with ID starting with b2f221e41e92c1a4104f1720bc744506a13aa5ea7f86f934b73c4c41b4b1e2ee not found: ID does not exist" Jan 29 12:52:53 crc kubenswrapper[4660]: I0129 12:52:53.029500 4660 scope.go:117] "RemoveContainer" containerID="21ca67f3f49f760e993e6c29c5613a3dac3d7528475810c533e3873d9ca672f1" Jan 29 12:52:53 crc kubenswrapper[4660]: E0129 
12:52:53.030046 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ca67f3f49f760e993e6c29c5613a3dac3d7528475810c533e3873d9ca672f1\": container with ID starting with 21ca67f3f49f760e993e6c29c5613a3dac3d7528475810c533e3873d9ca672f1 not found: ID does not exist" containerID="21ca67f3f49f760e993e6c29c5613a3dac3d7528475810c533e3873d9ca672f1" Jan 29 12:52:53 crc kubenswrapper[4660]: I0129 12:52:53.030071 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ca67f3f49f760e993e6c29c5613a3dac3d7528475810c533e3873d9ca672f1"} err="failed to get container status \"21ca67f3f49f760e993e6c29c5613a3dac3d7528475810c533e3873d9ca672f1\": rpc error: code = NotFound desc = could not find container \"21ca67f3f49f760e993e6c29c5613a3dac3d7528475810c533e3873d9ca672f1\": container with ID starting with 21ca67f3f49f760e993e6c29c5613a3dac3d7528475810c533e3873d9ca672f1 not found: ID does not exist" Jan 29 12:52:53 crc kubenswrapper[4660]: I0129 12:52:53.477978 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e528e323-93f5-4e88-89a8-5080de0cd260" path="/var/lib/kubelet/pods/e528e323-93f5-4e88-89a8-5080de0cd260/volumes" Jan 29 12:53:22 crc kubenswrapper[4660]: I0129 12:53:22.054365 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-657667746d-nlj9n_531184c9-ac70-494d-9efd-19d8a9022f32/manager/0.log" Jan 29 12:53:22 crc kubenswrapper[4660]: I0129 12:53:22.272669 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb_8b73c58d-8e49-4f14-98a0-a67114e62ff3/util/0.log" Jan 29 12:53:22 crc kubenswrapper[4660]: I0129 12:53:22.447216 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb_8b73c58d-8e49-4f14-98a0-a67114e62ff3/util/0.log" Jan 29 12:53:22 crc kubenswrapper[4660]: I0129 12:53:22.513411 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb_8b73c58d-8e49-4f14-98a0-a67114e62ff3/pull/0.log" Jan 29 12:53:22 crc kubenswrapper[4660]: I0129 12:53:22.536945 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb_8b73c58d-8e49-4f14-98a0-a67114e62ff3/pull/0.log" Jan 29 12:53:22 crc kubenswrapper[4660]: I0129 12:53:22.707953 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb_8b73c58d-8e49-4f14-98a0-a67114e62ff3/util/0.log" Jan 29 12:53:22 crc kubenswrapper[4660]: I0129 12:53:22.710563 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb_8b73c58d-8e49-4f14-98a0-a67114e62ff3/pull/0.log" Jan 29 12:53:22 crc kubenswrapper[4660]: I0129 12:53:22.745960 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66ft2qbb_8b73c58d-8e49-4f14-98a0-a67114e62ff3/extract/0.log" Jan 29 12:53:23 crc kubenswrapper[4660]: I0129 12:53:23.076842 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-55d5d5f8ff-jcsnd_6cb62294-b79d-4b84-b197-54a4ab0eeb50/manager/0.log" Jan 29 12:53:23 crc kubenswrapper[4660]: I0129 12:53:23.115488 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7595cf584-vhg9t_65751935-41e4-46ae-9cc8-c4e5d4193425/manager/0.log" Jan 29 12:53:23 crc 
kubenswrapper[4660]: I0129 12:53:23.250999 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6db5dbd896-p8t4z_2604f568-e449-450f-8c55-ab4d25510d85/manager/0.log" Jan 29 12:53:23 crc kubenswrapper[4660]: I0129 12:53:23.341001 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5499bccc75-2g5t6_6dfcb8d5-eaa9-44ad-81e6-e2243bf5083d/manager/0.log" Jan 29 12:53:23 crc kubenswrapper[4660]: I0129 12:53:23.482873 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-xnhss_693c5d51-c352-44fa-bbe8-8cd0ca86b80b/manager/0.log" Jan 29 12:53:23 crc kubenswrapper[4660]: I0129 12:53:23.555338 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-jstjj_379c54b4-ce54-4a69-8c0e-722fa84ed09f/manager/0.log" Jan 29 12:53:23 crc kubenswrapper[4660]: I0129 12:53:23.690562 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-56cb7c4b4c-n5x67_0ea29cd5-b44c-4d84-ad66-3360df645d54/manager/0.log" Jan 29 12:53:23 crc kubenswrapper[4660]: I0129 12:53:23.768817 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-77bb7ffb8c-c2w2h_b5ec2d08-e2cd-4103-bbab-63de4ecc5902/manager/0.log" Jan 29 12:53:23 crc kubenswrapper[4660]: I0129 12:53:23.900570 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6475bdcbc4-hqvr9_fe8323d4-90f2-455d-8198-de7b1918f1ae/manager/0.log" Jan 29 12:53:24 crc kubenswrapper[4660]: I0129 12:53:24.012423 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-lbsb9_a2bffb25-e078-4a03-9875-d8a154991b1e/manager/0.log" Jan 29 
12:53:24 crc kubenswrapper[4660]: I0129 12:53:24.118569 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-55df775b69-4pcpn_a39fd043-26b6-4d3a-99ae-920c9b0664c0/manager/0.log" Jan 29 12:53:24 crc kubenswrapper[4660]: I0129 12:53:24.243741 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5ccd5b7f8f-ncfnn_6ecfd6c0-ee18-43ef-a3d8-85db7dcfcd00/manager/0.log" Jan 29 12:53:24 crc kubenswrapper[4660]: I0129 12:53:24.358611 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6b855b4fc4-gmw7z_16fea8b6-0800-4b2f-abae-8ccbb97dee90/manager/0.log" Jan 29 12:53:24 crc kubenswrapper[4660]: I0129 12:53:24.445476 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dfgcc9_cb8b5e12-4b12-4d5c-b580-faa4aa0140fe/manager/0.log" Jan 29 12:53:24 crc kubenswrapper[4660]: I0129 12:53:24.692338 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-59c8666fb5-rrkpq_8d8f5a32-f4d8-409a-9daa-99522117fad6/operator/0.log" Jan 29 12:53:24 crc kubenswrapper[4660]: I0129 12:53:24.729774 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-65dc8f5954-v6vw6_c301e3ae-90ee-4c00-86be-1e7990da739c/manager/0.log" Jan 29 12:53:24 crc kubenswrapper[4660]: I0129 12:53:24.870860 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-jh4bg_90b0871c-024f-4dbf-8741-a22dd98b1a5c/registry-server/0.log" Jan 29 12:53:24 crc kubenswrapper[4660]: I0129 12:53:24.985776 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-sgm4h_ae21e403-c97f-4a6b-bb36-867168ab3f60/manager/0.log" Jan 29 12:53:25 crc kubenswrapper[4660]: I0129 12:53:25.154880 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-dbf4c_935fa2bb-c3f3-47f1-a316-96b0df84aedc/manager/0.log" Jan 29 12:53:25 crc kubenswrapper[4660]: I0129 12:53:25.239464 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-w585v_339f7c0f-fb9f-4ce8-a2be-eb94620e67e8/operator/0.log" Jan 29 12:53:25 crc kubenswrapper[4660]: I0129 12:53:25.355807 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6f7455757b-g2z99_78da1eca-6a33-4825-a671-a348c42a5f3e/manager/0.log" Jan 29 12:53:25 crc kubenswrapper[4660]: I0129 12:53:25.510969 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c95fd9dc5-f4gll_6fc68dc9-a2bd-48a5-b31d-a29ca15489d8/manager/0.log" Jan 29 12:53:25 crc kubenswrapper[4660]: I0129 12:53:25.638450 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-4vfnd_4abdab31-b35e-415e-b9f3-d1f014624f1b/manager/0.log" Jan 29 12:53:25 crc kubenswrapper[4660]: I0129 12:53:25.723734 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-56b5dc77fd-zbpsq_b5d2675b-f392-41fc-8d46-2f7c40e7d69d/manager/0.log" Jan 29 12:53:44 crc kubenswrapper[4660]: I0129 12:53:44.968458 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-4s8m8_7244f40b-2b72-48e2-bd02-5fdc718a460b/control-plane-machine-set-operator/0.log" Jan 29 12:53:45 crc kubenswrapper[4660]: I0129 12:53:45.120340 4660 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7dvlp_1afa8f6d-9033-41f9-b30c-4ce3b4b56399/kube-rbac-proxy/0.log" Jan 29 12:53:45 crc kubenswrapper[4660]: I0129 12:53:45.162506 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7dvlp_1afa8f6d-9033-41f9-b30c-4ce3b4b56399/machine-api-operator/0.log" Jan 29 12:53:57 crc kubenswrapper[4660]: I0129 12:53:57.035059 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-c9r5g_fb17e443-58ad-4928-9781-b9e041b9b5d9/cert-manager-cainjector/0.log" Jan 29 12:53:57 crc kubenswrapper[4660]: I0129 12:53:57.037550 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-ljhct_54caec82-1193-4ecb-a591-48fbe5587225/cert-manager-controller/0.log" Jan 29 12:53:57 crc kubenswrapper[4660]: I0129 12:53:57.088333 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-4nd9k_52340342-62d7-46e4-af31-d17f8a4bed1e/cert-manager-webhook/0.log" Jan 29 12:54:08 crc kubenswrapper[4660]: I0129 12:54:08.629112 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-q564x_d7fab196-9704-4991-8c67-8e0cadd2d4b5/nmstate-console-plugin/0.log" Jan 29 12:54:08 crc kubenswrapper[4660]: I0129 12:54:08.797180 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-l7psx_0289a953-d506-42a8-89ff-fb018ab0d5cd/nmstate-handler/0.log" Jan 29 12:54:08 crc kubenswrapper[4660]: I0129 12:54:08.844336 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-5qnsn_7c673176-aa01-4a2f-b319-f81e28800e05/kube-rbac-proxy/0.log" Jan 29 12:54:08 crc kubenswrapper[4660]: I0129 12:54:08.961249 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-5qnsn_7c673176-aa01-4a2f-b319-f81e28800e05/nmstate-metrics/0.log" Jan 29 12:54:09 crc kubenswrapper[4660]: I0129 12:54:09.042487 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-55lhd_86c25304-5d7d-46c2-b033-0c225c08f448/nmstate-operator/0.log" Jan 29 12:54:09 crc kubenswrapper[4660]: I0129 12:54:09.141673 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-j8hgw_87245509-8882-4337-88dc-9300b488472d/nmstate-webhook/0.log" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.537631 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k7xdf"] Jan 29 12:54:23 crc kubenswrapper[4660]: E0129 12:54:23.538408 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e528e323-93f5-4e88-89a8-5080de0cd260" containerName="extract-content" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.538421 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e528e323-93f5-4e88-89a8-5080de0cd260" containerName="extract-content" Jan 29 12:54:23 crc kubenswrapper[4660]: E0129 12:54:23.538435 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48" containerName="extract-content" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.538442 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48" containerName="extract-content" Jan 29 12:54:23 crc kubenswrapper[4660]: E0129 12:54:23.538452 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48" containerName="registry-server" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.538458 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48" containerName="registry-server" Jan 29 12:54:23 crc 
kubenswrapper[4660]: E0129 12:54:23.538471 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48" containerName="extract-utilities" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.538476 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48" containerName="extract-utilities" Jan 29 12:54:23 crc kubenswrapper[4660]: E0129 12:54:23.538493 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e528e323-93f5-4e88-89a8-5080de0cd260" containerName="registry-server" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.538498 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e528e323-93f5-4e88-89a8-5080de0cd260" containerName="registry-server" Jan 29 12:54:23 crc kubenswrapper[4660]: E0129 12:54:23.538510 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e528e323-93f5-4e88-89a8-5080de0cd260" containerName="extract-utilities" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.538515 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="e528e323-93f5-4e88-89a8-5080de0cd260" containerName="extract-utilities" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.538630 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9a38ae-caf0-47ee-9e5b-0af2a30b0a48" containerName="registry-server" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.538645 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="e528e323-93f5-4e88-89a8-5080de0cd260" containerName="registry-server" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.539650 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.558066 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7xdf"] Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.635412 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16f81985-302b-4463-9c75-62483c05afa3-catalog-content\") pod \"redhat-marketplace-k7xdf\" (UID: \"16f81985-302b-4463-9c75-62483c05afa3\") " pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.635491 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16f81985-302b-4463-9c75-62483c05afa3-utilities\") pod \"redhat-marketplace-k7xdf\" (UID: \"16f81985-302b-4463-9c75-62483c05afa3\") " pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.635534 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptqv6\" (UniqueName: \"kubernetes.io/projected/16f81985-302b-4463-9c75-62483c05afa3-kube-api-access-ptqv6\") pod \"redhat-marketplace-k7xdf\" (UID: \"16f81985-302b-4463-9c75-62483c05afa3\") " pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.736793 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16f81985-302b-4463-9c75-62483c05afa3-catalog-content\") pod \"redhat-marketplace-k7xdf\" (UID: \"16f81985-302b-4463-9c75-62483c05afa3\") " pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.737107 4660 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16f81985-302b-4463-9c75-62483c05afa3-utilities\") pod \"redhat-marketplace-k7xdf\" (UID: \"16f81985-302b-4463-9c75-62483c05afa3\") " pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.737220 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptqv6\" (UniqueName: \"kubernetes.io/projected/16f81985-302b-4463-9c75-62483c05afa3-kube-api-access-ptqv6\") pod \"redhat-marketplace-k7xdf\" (UID: \"16f81985-302b-4463-9c75-62483c05afa3\") " pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.737264 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16f81985-302b-4463-9c75-62483c05afa3-catalog-content\") pod \"redhat-marketplace-k7xdf\" (UID: \"16f81985-302b-4463-9c75-62483c05afa3\") " pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.737545 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16f81985-302b-4463-9c75-62483c05afa3-utilities\") pod \"redhat-marketplace-k7xdf\" (UID: \"16f81985-302b-4463-9c75-62483c05afa3\") " pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.762734 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptqv6\" (UniqueName: \"kubernetes.io/projected/16f81985-302b-4463-9c75-62483c05afa3-kube-api-access-ptqv6\") pod \"redhat-marketplace-k7xdf\" (UID: \"16f81985-302b-4463-9c75-62483c05afa3\") " pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:23 crc kubenswrapper[4660]: I0129 12:54:23.862932 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:24 crc kubenswrapper[4660]: I0129 12:54:24.338601 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7xdf"] Jan 29 12:54:24 crc kubenswrapper[4660]: I0129 12:54:24.552856 4660 generic.go:334] "Generic (PLEG): container finished" podID="16f81985-302b-4463-9c75-62483c05afa3" containerID="11b0db3a8a43d7006e82287cf6132db6d289ff50bc2c0fa6237cacea25ef0e9a" exitCode=0 Jan 29 12:54:24 crc kubenswrapper[4660]: I0129 12:54:24.552905 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7xdf" event={"ID":"16f81985-302b-4463-9c75-62483c05afa3","Type":"ContainerDied","Data":"11b0db3a8a43d7006e82287cf6132db6d289ff50bc2c0fa6237cacea25ef0e9a"} Jan 29 12:54:24 crc kubenswrapper[4660]: I0129 12:54:24.552935 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7xdf" event={"ID":"16f81985-302b-4463-9c75-62483c05afa3","Type":"ContainerStarted","Data":"d13a71c85378d37a28aa44106322db36f78e0ba251cfd68f0efd1d1c95613f09"} Jan 29 12:54:24 crc kubenswrapper[4660]: I0129 12:54:24.554575 4660 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:54:25 crc kubenswrapper[4660]: I0129 12:54:25.559944 4660 generic.go:334] "Generic (PLEG): container finished" podID="16f81985-302b-4463-9c75-62483c05afa3" containerID="8923dff5dda2c5353e136a8deb26e478287d76f85e47dc0ad9e055dddbfc8d3a" exitCode=0 Jan 29 12:54:25 crc kubenswrapper[4660]: I0129 12:54:25.560001 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7xdf" event={"ID":"16f81985-302b-4463-9c75-62483c05afa3","Type":"ContainerDied","Data":"8923dff5dda2c5353e136a8deb26e478287d76f85e47dc0ad9e055dddbfc8d3a"} Jan 29 12:54:26 crc kubenswrapper[4660]: I0129 12:54:26.269119 4660 patch_prober.go:28] interesting 
pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:54:26 crc kubenswrapper[4660]: I0129 12:54:26.269405 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:54:26 crc kubenswrapper[4660]: I0129 12:54:26.568046 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7xdf" event={"ID":"16f81985-302b-4463-9c75-62483c05afa3","Type":"ContainerStarted","Data":"48eb0d47f519c64bcfdade017b5956b1681d5cb28cc25d57552a750d01aa6d6b"} Jan 29 12:54:26 crc kubenswrapper[4660]: I0129 12:54:26.588067 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k7xdf" podStartSLOduration=2.117355411 podStartE2EDuration="3.588048773s" podCreationTimestamp="2026-01-29 12:54:23 +0000 UTC" firstStartedPulling="2026-01-29 12:54:24.554387207 +0000 UTC m=+2901.777329339" lastFinishedPulling="2026-01-29 12:54:26.025080549 +0000 UTC m=+2903.248022701" observedRunningTime="2026-01-29 12:54:26.584596104 +0000 UTC m=+2903.807538236" watchObservedRunningTime="2026-01-29 12:54:26.588048773 +0000 UTC m=+2903.810990895" Jan 29 12:54:33 crc kubenswrapper[4660]: I0129 12:54:33.863992 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:33 crc kubenswrapper[4660]: I0129 12:54:33.864548 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:33 
crc kubenswrapper[4660]: I0129 12:54:33.906441 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:34 crc kubenswrapper[4660]: I0129 12:54:34.667069 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:34 crc kubenswrapper[4660]: I0129 12:54:34.715720 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7xdf"] Jan 29 12:54:35 crc kubenswrapper[4660]: I0129 12:54:35.574192 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-mpn6c_2d971de7-678b-494a-b438-20dfd769dec8/controller/0.log" Jan 29 12:54:35 crc kubenswrapper[4660]: I0129 12:54:35.621170 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-mpn6c_2d971de7-678b-494a-b438-20dfd769dec8/kube-rbac-proxy/0.log" Jan 29 12:54:35 crc kubenswrapper[4660]: I0129 12:54:35.833229 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-x2p5k_67ec6fe2-fa0d-4c21-a12b-79e2a2e4d9b4/frr-k8s-webhook-server/0.log" Jan 29 12:54:35 crc kubenswrapper[4660]: I0129 12:54:35.960445 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/cp-frr-files/0.log" Jan 29 12:54:36 crc kubenswrapper[4660]: I0129 12:54:36.136438 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/cp-frr-files/0.log" Jan 29 12:54:36 crc kubenswrapper[4660]: I0129 12:54:36.144417 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/cp-metrics/0.log" Jan 29 12:54:36 crc kubenswrapper[4660]: I0129 12:54:36.152517 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/cp-reloader/0.log" Jan 29 12:54:36 crc kubenswrapper[4660]: I0129 12:54:36.206513 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/cp-reloader/0.log" Jan 29 12:54:36 crc kubenswrapper[4660]: I0129 12:54:36.458729 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/cp-reloader/0.log" Jan 29 12:54:36 crc kubenswrapper[4660]: I0129 12:54:36.467298 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/cp-frr-files/0.log" Jan 29 12:54:36 crc kubenswrapper[4660]: I0129 12:54:36.470196 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/cp-metrics/0.log" Jan 29 12:54:36 crc kubenswrapper[4660]: I0129 12:54:36.491818 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/cp-metrics/0.log" Jan 29 12:54:36 crc kubenswrapper[4660]: I0129 12:54:36.636570 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k7xdf" podUID="16f81985-302b-4463-9c75-62483c05afa3" containerName="registry-server" containerID="cri-o://48eb0d47f519c64bcfdade017b5956b1681d5cb28cc25d57552a750d01aa6d6b" gracePeriod=2 Jan 29 12:54:36 crc kubenswrapper[4660]: I0129 12:54:36.699442 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/cp-metrics/0.log" Jan 29 12:54:36 crc kubenswrapper[4660]: I0129 12:54:36.712946 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/cp-reloader/0.log" Jan 29 12:54:36 crc kubenswrapper[4660]: 
I0129 12:54:36.715331 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/cp-frr-files/0.log" Jan 29 12:54:36 crc kubenswrapper[4660]: I0129 12:54:36.745367 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/controller/0.log" Jan 29 12:54:36 crc kubenswrapper[4660]: I0129 12:54:36.909523 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/kube-rbac-proxy/0.log" Jan 29 12:54:36 crc kubenswrapper[4660]: I0129 12:54:36.932267 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/frr-metrics/0.log" Jan 29 12:54:36 crc kubenswrapper[4660]: I0129 12:54:36.978060 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/kube-rbac-proxy-frr/0.log" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.170347 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/frr/0.log" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.217112 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z2bhd_e13bbd49-3f1c-4235-988b-001247a4f125/reloader/0.log" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.252495 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-b8955cf6-tjmck_3d7688d4-6ab1-40b9-aadc-08ca5bb4be13/manager/0.log" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.420053 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5964957fbf-px6c4_2a0050d9-566a-4127-ae73-093fe7fcef53/webhook-server/0.log" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 
12:54:37.511462 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m8vmm_325bb691-ed31-439a-8a6c-b244152fce18/kube-rbac-proxy/0.log" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.574040 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.646092 4660 generic.go:334] "Generic (PLEG): container finished" podID="16f81985-302b-4463-9c75-62483c05afa3" containerID="48eb0d47f519c64bcfdade017b5956b1681d5cb28cc25d57552a750d01aa6d6b" exitCode=0 Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.646140 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7xdf" event={"ID":"16f81985-302b-4463-9c75-62483c05afa3","Type":"ContainerDied","Data":"48eb0d47f519c64bcfdade017b5956b1681d5cb28cc25d57552a750d01aa6d6b"} Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.646153 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7xdf" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.646168 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7xdf" event={"ID":"16f81985-302b-4463-9c75-62483c05afa3","Type":"ContainerDied","Data":"d13a71c85378d37a28aa44106322db36f78e0ba251cfd68f0efd1d1c95613f09"} Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.646188 4660 scope.go:117] "RemoveContainer" containerID="48eb0d47f519c64bcfdade017b5956b1681d5cb28cc25d57552a750d01aa6d6b" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.665824 4660 scope.go:117] "RemoveContainer" containerID="8923dff5dda2c5353e136a8deb26e478287d76f85e47dc0ad9e055dddbfc8d3a" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.694584 4660 scope.go:117] "RemoveContainer" containerID="11b0db3a8a43d7006e82287cf6132db6d289ff50bc2c0fa6237cacea25ef0e9a" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.715245 4660 scope.go:117] "RemoveContainer" containerID="48eb0d47f519c64bcfdade017b5956b1681d5cb28cc25d57552a750d01aa6d6b" Jan 29 12:54:37 crc kubenswrapper[4660]: E0129 12:54:37.718816 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48eb0d47f519c64bcfdade017b5956b1681d5cb28cc25d57552a750d01aa6d6b\": container with ID starting with 48eb0d47f519c64bcfdade017b5956b1681d5cb28cc25d57552a750d01aa6d6b not found: ID does not exist" containerID="48eb0d47f519c64bcfdade017b5956b1681d5cb28cc25d57552a750d01aa6d6b" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.718848 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48eb0d47f519c64bcfdade017b5956b1681d5cb28cc25d57552a750d01aa6d6b"} err="failed to get container status \"48eb0d47f519c64bcfdade017b5956b1681d5cb28cc25d57552a750d01aa6d6b\": rpc error: code = NotFound desc = could not find container 
\"48eb0d47f519c64bcfdade017b5956b1681d5cb28cc25d57552a750d01aa6d6b\": container with ID starting with 48eb0d47f519c64bcfdade017b5956b1681d5cb28cc25d57552a750d01aa6d6b not found: ID does not exist" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.718868 4660 scope.go:117] "RemoveContainer" containerID="8923dff5dda2c5353e136a8deb26e478287d76f85e47dc0ad9e055dddbfc8d3a" Jan 29 12:54:37 crc kubenswrapper[4660]: E0129 12:54:37.719184 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8923dff5dda2c5353e136a8deb26e478287d76f85e47dc0ad9e055dddbfc8d3a\": container with ID starting with 8923dff5dda2c5353e136a8deb26e478287d76f85e47dc0ad9e055dddbfc8d3a not found: ID does not exist" containerID="8923dff5dda2c5353e136a8deb26e478287d76f85e47dc0ad9e055dddbfc8d3a" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.719204 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8923dff5dda2c5353e136a8deb26e478287d76f85e47dc0ad9e055dddbfc8d3a"} err="failed to get container status \"8923dff5dda2c5353e136a8deb26e478287d76f85e47dc0ad9e055dddbfc8d3a\": rpc error: code = NotFound desc = could not find container \"8923dff5dda2c5353e136a8deb26e478287d76f85e47dc0ad9e055dddbfc8d3a\": container with ID starting with 8923dff5dda2c5353e136a8deb26e478287d76f85e47dc0ad9e055dddbfc8d3a not found: ID does not exist" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.719218 4660 scope.go:117] "RemoveContainer" containerID="11b0db3a8a43d7006e82287cf6132db6d289ff50bc2c0fa6237cacea25ef0e9a" Jan 29 12:54:37 crc kubenswrapper[4660]: E0129 12:54:37.719428 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11b0db3a8a43d7006e82287cf6132db6d289ff50bc2c0fa6237cacea25ef0e9a\": container with ID starting with 11b0db3a8a43d7006e82287cf6132db6d289ff50bc2c0fa6237cacea25ef0e9a not found: ID does not exist" 
containerID="11b0db3a8a43d7006e82287cf6132db6d289ff50bc2c0fa6237cacea25ef0e9a" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.719445 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11b0db3a8a43d7006e82287cf6132db6d289ff50bc2c0fa6237cacea25ef0e9a"} err="failed to get container status \"11b0db3a8a43d7006e82287cf6132db6d289ff50bc2c0fa6237cacea25ef0e9a\": rpc error: code = NotFound desc = could not find container \"11b0db3a8a43d7006e82287cf6132db6d289ff50bc2c0fa6237cacea25ef0e9a\": container with ID starting with 11b0db3a8a43d7006e82287cf6132db6d289ff50bc2c0fa6237cacea25ef0e9a not found: ID does not exist" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.754172 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16f81985-302b-4463-9c75-62483c05afa3-catalog-content\") pod \"16f81985-302b-4463-9c75-62483c05afa3\" (UID: \"16f81985-302b-4463-9c75-62483c05afa3\") " Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.754234 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16f81985-302b-4463-9c75-62483c05afa3-utilities\") pod \"16f81985-302b-4463-9c75-62483c05afa3\" (UID: \"16f81985-302b-4463-9c75-62483c05afa3\") " Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.754263 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptqv6\" (UniqueName: \"kubernetes.io/projected/16f81985-302b-4463-9c75-62483c05afa3-kube-api-access-ptqv6\") pod \"16f81985-302b-4463-9c75-62483c05afa3\" (UID: \"16f81985-302b-4463-9c75-62483c05afa3\") " Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.755132 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16f81985-302b-4463-9c75-62483c05afa3-utilities" (OuterVolumeSpecName: "utilities") pod 
"16f81985-302b-4463-9c75-62483c05afa3" (UID: "16f81985-302b-4463-9c75-62483c05afa3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.759137 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f81985-302b-4463-9c75-62483c05afa3-kube-api-access-ptqv6" (OuterVolumeSpecName: "kube-api-access-ptqv6") pod "16f81985-302b-4463-9c75-62483c05afa3" (UID: "16f81985-302b-4463-9c75-62483c05afa3"). InnerVolumeSpecName "kube-api-access-ptqv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.779423 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16f81985-302b-4463-9c75-62483c05afa3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16f81985-302b-4463-9c75-62483c05afa3" (UID: "16f81985-302b-4463-9c75-62483c05afa3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.856043 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16f81985-302b-4463-9c75-62483c05afa3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.856083 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16f81985-302b-4463-9c75-62483c05afa3-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.856094 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptqv6\" (UniqueName: \"kubernetes.io/projected/16f81985-302b-4463-9c75-62483c05afa3-kube-api-access-ptqv6\") on node \"crc\" DevicePath \"\"" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.932705 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m8vmm_325bb691-ed31-439a-8a6c-b244152fce18/speaker/0.log" Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.981188 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7xdf"] Jan 29 12:54:37 crc kubenswrapper[4660]: I0129 12:54:37.987655 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7xdf"] Jan 29 12:54:39 crc kubenswrapper[4660]: I0129 12:54:39.477452 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f81985-302b-4463-9c75-62483c05afa3" path="/var/lib/kubelet/pods/16f81985-302b-4463-9c75-62483c05afa3/volumes" Jan 29 12:54:49 crc kubenswrapper[4660]: I0129 12:54:49.997720 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql_5cca2967-8925-4c9d-8e9f-8912305e7163/util/0.log" Jan 29 12:54:50 crc kubenswrapper[4660]: I0129 12:54:50.112664 4660 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql_5cca2967-8925-4c9d-8e9f-8912305e7163/util/0.log" Jan 29 12:54:50 crc kubenswrapper[4660]: I0129 12:54:50.149545 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql_5cca2967-8925-4c9d-8e9f-8912305e7163/pull/0.log" Jan 29 12:54:50 crc kubenswrapper[4660]: I0129 12:54:50.187957 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql_5cca2967-8925-4c9d-8e9f-8912305e7163/pull/0.log" Jan 29 12:54:50 crc kubenswrapper[4660]: I0129 12:54:50.331485 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql_5cca2967-8925-4c9d-8e9f-8912305e7163/pull/0.log" Jan 29 12:54:50 crc kubenswrapper[4660]: I0129 12:54:50.338884 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql_5cca2967-8925-4c9d-8e9f-8912305e7163/util/0.log" Jan 29 12:54:50 crc kubenswrapper[4660]: I0129 12:54:50.392539 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctq8ql_5cca2967-8925-4c9d-8e9f-8912305e7163/extract/0.log" Jan 29 12:54:50 crc kubenswrapper[4660]: I0129 12:54:50.527865 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj_e86dcb10-336d-41f0-a29e-bcf75712b335/util/0.log" Jan 29 12:54:50 crc kubenswrapper[4660]: I0129 12:54:50.674010 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj_e86dcb10-336d-41f0-a29e-bcf75712b335/util/0.log" Jan 29 12:54:50 crc kubenswrapper[4660]: I0129 12:54:50.675806 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj_e86dcb10-336d-41f0-a29e-bcf75712b335/pull/0.log" Jan 29 12:54:50 crc kubenswrapper[4660]: I0129 12:54:50.690542 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj_e86dcb10-336d-41f0-a29e-bcf75712b335/pull/0.log" Jan 29 12:54:50 crc kubenswrapper[4660]: I0129 12:54:50.908118 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj_e86dcb10-336d-41f0-a29e-bcf75712b335/extract/0.log" Jan 29 12:54:50 crc kubenswrapper[4660]: I0129 12:54:50.924893 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj_e86dcb10-336d-41f0-a29e-bcf75712b335/util/0.log" Jan 29 12:54:50 crc kubenswrapper[4660]: I0129 12:54:50.960827 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjptj_e86dcb10-336d-41f0-a29e-bcf75712b335/pull/0.log" Jan 29 12:54:51 crc kubenswrapper[4660]: I0129 12:54:51.085952 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-trzg2_f78af36d-0cfb-438d-9763-cff2b46f13f7/extract-utilities/0.log" Jan 29 12:54:51 crc kubenswrapper[4660]: I0129 12:54:51.249750 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-trzg2_f78af36d-0cfb-438d-9763-cff2b46f13f7/extract-content/0.log" Jan 29 12:54:51 crc kubenswrapper[4660]: I0129 
12:54:51.296442 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-trzg2_f78af36d-0cfb-438d-9763-cff2b46f13f7/extract-content/0.log" Jan 29 12:54:51 crc kubenswrapper[4660]: I0129 12:54:51.296459 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-trzg2_f78af36d-0cfb-438d-9763-cff2b46f13f7/extract-utilities/0.log" Jan 29 12:54:51 crc kubenswrapper[4660]: I0129 12:54:51.466869 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-trzg2_f78af36d-0cfb-438d-9763-cff2b46f13f7/extract-content/0.log" Jan 29 12:54:51 crc kubenswrapper[4660]: I0129 12:54:51.466990 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-trzg2_f78af36d-0cfb-438d-9763-cff2b46f13f7/extract-utilities/0.log" Jan 29 12:54:51 crc kubenswrapper[4660]: I0129 12:54:51.749848 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kxjhk_4f109fce-f7d8-4f49-970d-14950db78713/extract-utilities/0.log" Jan 29 12:54:51 crc kubenswrapper[4660]: I0129 12:54:51.875284 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-trzg2_f78af36d-0cfb-438d-9763-cff2b46f13f7/registry-server/0.log" Jan 29 12:54:51 crc kubenswrapper[4660]: I0129 12:54:51.957850 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kxjhk_4f109fce-f7d8-4f49-970d-14950db78713/extract-utilities/0.log" Jan 29 12:54:51 crc kubenswrapper[4660]: I0129 12:54:51.986539 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kxjhk_4f109fce-f7d8-4f49-970d-14950db78713/extract-content/0.log" Jan 29 12:54:52 crc kubenswrapper[4660]: I0129 12:54:52.023849 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-kxjhk_4f109fce-f7d8-4f49-970d-14950db78713/extract-content/0.log" Jan 29 12:54:52 crc kubenswrapper[4660]: I0129 12:54:52.140472 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kxjhk_4f109fce-f7d8-4f49-970d-14950db78713/extract-content/0.log" Jan 29 12:54:52 crc kubenswrapper[4660]: I0129 12:54:52.183635 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kxjhk_4f109fce-f7d8-4f49-970d-14950db78713/extract-utilities/0.log" Jan 29 12:54:52 crc kubenswrapper[4660]: I0129 12:54:52.405940 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-shqms_f5d3a7e0-3f4d-4e66-ad34-4c7835e0b625/marketplace-operator/0.log" Jan 29 12:54:52 crc kubenswrapper[4660]: I0129 12:54:52.584709 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kxjhk_4f109fce-f7d8-4f49-970d-14950db78713/registry-server/0.log" Jan 29 12:54:52 crc kubenswrapper[4660]: I0129 12:54:52.630740 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7lkhw_1b24d899-bb6c-465e-8e0f-594f8581b035/extract-utilities/0.log" Jan 29 12:54:52 crc kubenswrapper[4660]: I0129 12:54:52.764501 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7lkhw_1b24d899-bb6c-465e-8e0f-594f8581b035/extract-content/0.log" Jan 29 12:54:52 crc kubenswrapper[4660]: I0129 12:54:52.767400 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7lkhw_1b24d899-bb6c-465e-8e0f-594f8581b035/extract-utilities/0.log" Jan 29 12:54:52 crc kubenswrapper[4660]: I0129 12:54:52.791484 4660 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-7lkhw_1b24d899-bb6c-465e-8e0f-594f8581b035/extract-content/0.log" Jan 29 12:54:52 crc kubenswrapper[4660]: I0129 12:54:52.976713 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7lkhw_1b24d899-bb6c-465e-8e0f-594f8581b035/extract-utilities/0.log" Jan 29 12:54:53 crc kubenswrapper[4660]: I0129 12:54:53.022783 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7lkhw_1b24d899-bb6c-465e-8e0f-594f8581b035/extract-content/0.log" Jan 29 12:54:53 crc kubenswrapper[4660]: I0129 12:54:53.115072 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7lkhw_1b24d899-bb6c-465e-8e0f-594f8581b035/registry-server/0.log" Jan 29 12:54:53 crc kubenswrapper[4660]: I0129 12:54:53.230926 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gfh45_c5f2d9bf-38d4-4484-a429-37373a55db37/extract-utilities/0.log" Jan 29 12:54:53 crc kubenswrapper[4660]: I0129 12:54:53.406792 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gfh45_c5f2d9bf-38d4-4484-a429-37373a55db37/extract-content/0.log" Jan 29 12:54:53 crc kubenswrapper[4660]: I0129 12:54:53.434280 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gfh45_c5f2d9bf-38d4-4484-a429-37373a55db37/extract-content/0.log" Jan 29 12:54:53 crc kubenswrapper[4660]: I0129 12:54:53.455380 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gfh45_c5f2d9bf-38d4-4484-a429-37373a55db37/extract-utilities/0.log" Jan 29 12:54:53 crc kubenswrapper[4660]: I0129 12:54:53.612091 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gfh45_c5f2d9bf-38d4-4484-a429-37373a55db37/extract-utilities/0.log" 
Jan 29 12:54:53 crc kubenswrapper[4660]: I0129 12:54:53.722990 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gfh45_c5f2d9bf-38d4-4484-a429-37373a55db37/extract-content/0.log" Jan 29 12:54:54 crc kubenswrapper[4660]: I0129 12:54:54.025761 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gfh45_c5f2d9bf-38d4-4484-a429-37373a55db37/registry-server/0.log" Jan 29 12:54:56 crc kubenswrapper[4660]: I0129 12:54:56.269669 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:54:56 crc kubenswrapper[4660]: I0129 12:54:56.270000 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:55:26 crc kubenswrapper[4660]: I0129 12:55:26.269843 4660 patch_prober.go:28] interesting pod/machine-config-daemon-mdfz2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:55:26 crc kubenswrapper[4660]: I0129 12:55:26.270370 4660 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:55:26 crc kubenswrapper[4660]: 
I0129 12:55:26.270409 4660 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" Jan 29 12:55:26 crc kubenswrapper[4660]: I0129 12:55:26.270979 4660 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351"} pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:55:26 crc kubenswrapper[4660]: I0129 12:55:26.271036 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerName="machine-config-daemon" containerID="cri-o://e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" gracePeriod=600 Jan 29 12:55:26 crc kubenswrapper[4660]: E0129 12:55:26.395890 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:55:26 crc kubenswrapper[4660]: I0129 12:55:26.971821 4660 generic.go:334] "Generic (PLEG): container finished" podID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" exitCode=0 Jan 29 12:55:26 crc kubenswrapper[4660]: I0129 12:55:26.971866 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerDied","Data":"e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351"} Jan 29 12:55:26 crc kubenswrapper[4660]: I0129 12:55:26.971946 4660 scope.go:117] "RemoveContainer" containerID="78c14d96f551daa89edbdac1885e3352998009dccbe8593bbae73ef82702c98d" Jan 29 12:55:26 crc kubenswrapper[4660]: I0129 12:55:26.972488 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:55:26 crc kubenswrapper[4660]: E0129 12:55:26.972830 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:55:43 crc kubenswrapper[4660]: I0129 12:55:43.474377 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:55:43 crc kubenswrapper[4660]: E0129 12:55:43.475003 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:55:56 crc kubenswrapper[4660]: I0129 12:55:56.469623 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:55:56 crc kubenswrapper[4660]: E0129 12:55:56.470346 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:56:03 crc kubenswrapper[4660]: I0129 12:56:03.607512 4660 generic.go:334] "Generic (PLEG): container finished" podID="69ad4983-667d-4021-89eb-e3145bd9b2df" containerID="a0303e6f9c5b9a031dc702d10bfcd441b327a8aff45827bcbb065ccff4f6994e" exitCode=0 Jan 29 12:56:03 crc kubenswrapper[4660]: I0129 12:56:03.607587 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zr55v/must-gather-hg7fs" event={"ID":"69ad4983-667d-4021-89eb-e3145bd9b2df","Type":"ContainerDied","Data":"a0303e6f9c5b9a031dc702d10bfcd441b327a8aff45827bcbb065ccff4f6994e"} Jan 29 12:56:03 crc kubenswrapper[4660]: I0129 12:56:03.608538 4660 scope.go:117] "RemoveContainer" containerID="a0303e6f9c5b9a031dc702d10bfcd441b327a8aff45827bcbb065ccff4f6994e" Jan 29 12:56:04 crc kubenswrapper[4660]: I0129 12:56:04.480774 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zr55v_must-gather-hg7fs_69ad4983-667d-4021-89eb-e3145bd9b2df/gather/0.log" Jan 29 12:56:07 crc kubenswrapper[4660]: I0129 12:56:07.471387 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:56:07 crc kubenswrapper[4660]: E0129 12:56:07.472251 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:56:12 crc kubenswrapper[4660]: I0129 12:56:12.636925 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zr55v/must-gather-hg7fs"] Jan 29 12:56:12 crc kubenswrapper[4660]: I0129 12:56:12.638908 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zr55v/must-gather-hg7fs" podUID="69ad4983-667d-4021-89eb-e3145bd9b2df" containerName="copy" containerID="cri-o://38ddc15c162462885246d1bd50242726e760226a917b86e14ef2a5d162b1779f" gracePeriod=2 Jan 29 12:56:12 crc kubenswrapper[4660]: I0129 12:56:12.643436 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zr55v/must-gather-hg7fs"] Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.018152 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zr55v_must-gather-hg7fs_69ad4983-667d-4021-89eb-e3145bd9b2df/copy/0.log" Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.018826 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zr55v/must-gather-hg7fs" Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.049294 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/69ad4983-667d-4021-89eb-e3145bd9b2df-must-gather-output\") pod \"69ad4983-667d-4021-89eb-e3145bd9b2df\" (UID: \"69ad4983-667d-4021-89eb-e3145bd9b2df\") " Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.049780 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4wmd\" (UniqueName: \"kubernetes.io/projected/69ad4983-667d-4021-89eb-e3145bd9b2df-kube-api-access-m4wmd\") pod \"69ad4983-667d-4021-89eb-e3145bd9b2df\" (UID: \"69ad4983-667d-4021-89eb-e3145bd9b2df\") " Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.061064 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69ad4983-667d-4021-89eb-e3145bd9b2df-kube-api-access-m4wmd" (OuterVolumeSpecName: "kube-api-access-m4wmd") pod "69ad4983-667d-4021-89eb-e3145bd9b2df" (UID: "69ad4983-667d-4021-89eb-e3145bd9b2df"). InnerVolumeSpecName "kube-api-access-m4wmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.128380 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69ad4983-667d-4021-89eb-e3145bd9b2df-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "69ad4983-667d-4021-89eb-e3145bd9b2df" (UID: "69ad4983-667d-4021-89eb-e3145bd9b2df"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.151174 4660 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/69ad4983-667d-4021-89eb-e3145bd9b2df-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.151215 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4wmd\" (UniqueName: \"kubernetes.io/projected/69ad4983-667d-4021-89eb-e3145bd9b2df-kube-api-access-m4wmd\") on node \"crc\" DevicePath \"\"" Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.479515 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69ad4983-667d-4021-89eb-e3145bd9b2df" path="/var/lib/kubelet/pods/69ad4983-667d-4021-89eb-e3145bd9b2df/volumes" Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.700677 4660 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zr55v_must-gather-hg7fs_69ad4983-667d-4021-89eb-e3145bd9b2df/copy/0.log" Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.701085 4660 generic.go:334] "Generic (PLEG): container finished" podID="69ad4983-667d-4021-89eb-e3145bd9b2df" containerID="38ddc15c162462885246d1bd50242726e760226a917b86e14ef2a5d162b1779f" exitCode=143 Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.701140 4660 scope.go:117] "RemoveContainer" containerID="38ddc15c162462885246d1bd50242726e760226a917b86e14ef2a5d162b1779f" Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.701175 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zr55v/must-gather-hg7fs" Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.717135 4660 scope.go:117] "RemoveContainer" containerID="a0303e6f9c5b9a031dc702d10bfcd441b327a8aff45827bcbb065ccff4f6994e" Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.767220 4660 scope.go:117] "RemoveContainer" containerID="38ddc15c162462885246d1bd50242726e760226a917b86e14ef2a5d162b1779f" Jan 29 12:56:13 crc kubenswrapper[4660]: E0129 12:56:13.767763 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38ddc15c162462885246d1bd50242726e760226a917b86e14ef2a5d162b1779f\": container with ID starting with 38ddc15c162462885246d1bd50242726e760226a917b86e14ef2a5d162b1779f not found: ID does not exist" containerID="38ddc15c162462885246d1bd50242726e760226a917b86e14ef2a5d162b1779f" Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.767815 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38ddc15c162462885246d1bd50242726e760226a917b86e14ef2a5d162b1779f"} err="failed to get container status \"38ddc15c162462885246d1bd50242726e760226a917b86e14ef2a5d162b1779f\": rpc error: code = NotFound desc = could not find container \"38ddc15c162462885246d1bd50242726e760226a917b86e14ef2a5d162b1779f\": container with ID starting with 38ddc15c162462885246d1bd50242726e760226a917b86e14ef2a5d162b1779f not found: ID does not exist" Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.767840 4660 scope.go:117] "RemoveContainer" containerID="a0303e6f9c5b9a031dc702d10bfcd441b327a8aff45827bcbb065ccff4f6994e" Jan 29 12:56:13 crc kubenswrapper[4660]: E0129 12:56:13.768211 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0303e6f9c5b9a031dc702d10bfcd441b327a8aff45827bcbb065ccff4f6994e\": container with ID starting with 
a0303e6f9c5b9a031dc702d10bfcd441b327a8aff45827bcbb065ccff4f6994e not found: ID does not exist" containerID="a0303e6f9c5b9a031dc702d10bfcd441b327a8aff45827bcbb065ccff4f6994e" Jan 29 12:56:13 crc kubenswrapper[4660]: I0129 12:56:13.768235 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0303e6f9c5b9a031dc702d10bfcd441b327a8aff45827bcbb065ccff4f6994e"} err="failed to get container status \"a0303e6f9c5b9a031dc702d10bfcd441b327a8aff45827bcbb065ccff4f6994e\": rpc error: code = NotFound desc = could not find container \"a0303e6f9c5b9a031dc702d10bfcd441b327a8aff45827bcbb065ccff4f6994e\": container with ID starting with a0303e6f9c5b9a031dc702d10bfcd441b327a8aff45827bcbb065ccff4f6994e not found: ID does not exist" Jan 29 12:56:19 crc kubenswrapper[4660]: I0129 12:56:19.469996 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:56:19 crc kubenswrapper[4660]: E0129 12:56:19.470554 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:56:33 crc kubenswrapper[4660]: I0129 12:56:33.473567 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:56:33 crc kubenswrapper[4660]: E0129 12:56:33.475111 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:56:44 crc kubenswrapper[4660]: I0129 12:56:44.470421 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:56:44 crc kubenswrapper[4660]: E0129 12:56:44.471997 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:56:57 crc kubenswrapper[4660]: I0129 12:56:57.469945 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:56:57 crc kubenswrapper[4660]: E0129 12:56:57.470782 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:57:09 crc kubenswrapper[4660]: I0129 12:57:09.470401 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:57:09 crc kubenswrapper[4660]: E0129 12:57:09.471595 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:57:20 crc kubenswrapper[4660]: I0129 12:57:20.470932 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:57:20 crc kubenswrapper[4660]: E0129 12:57:20.471804 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:57:35 crc kubenswrapper[4660]: I0129 12:57:35.470232 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:57:35 crc kubenswrapper[4660]: E0129 12:57:35.471560 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:57:50 crc kubenswrapper[4660]: I0129 12:57:50.470558 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:57:50 crc kubenswrapper[4660]: E0129 12:57:50.471386 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.347601 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dfl6w"] Jan 29 12:57:54 crc kubenswrapper[4660]: E0129 12:57:54.348375 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ad4983-667d-4021-89eb-e3145bd9b2df" containerName="copy" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.348395 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ad4983-667d-4021-89eb-e3145bd9b2df" containerName="copy" Jan 29 12:57:54 crc kubenswrapper[4660]: E0129 12:57:54.348412 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ad4983-667d-4021-89eb-e3145bd9b2df" containerName="gather" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.348423 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ad4983-667d-4021-89eb-e3145bd9b2df" containerName="gather" Jan 29 12:57:54 crc kubenswrapper[4660]: E0129 12:57:54.348441 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f81985-302b-4463-9c75-62483c05afa3" containerName="extract-content" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.348453 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f81985-302b-4463-9c75-62483c05afa3" containerName="extract-content" Jan 29 12:57:54 crc kubenswrapper[4660]: E0129 12:57:54.348471 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f81985-302b-4463-9c75-62483c05afa3" containerName="extract-utilities" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.348482 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f81985-302b-4463-9c75-62483c05afa3" containerName="extract-utilities" Jan 29 12:57:54 crc 
kubenswrapper[4660]: E0129 12:57:54.348501 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f81985-302b-4463-9c75-62483c05afa3" containerName="registry-server" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.348513 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f81985-302b-4463-9c75-62483c05afa3" containerName="registry-server" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.348750 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="69ad4983-667d-4021-89eb-e3145bd9b2df" containerName="gather" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.348781 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="69ad4983-667d-4021-89eb-e3145bd9b2df" containerName="copy" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.348791 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f81985-302b-4463-9c75-62483c05afa3" containerName="registry-server" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.355527 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.361741 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dfl6w"] Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.452055 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b4rz\" (UniqueName: \"kubernetes.io/projected/104c0c5d-caaa-4b8a-84eb-3db527cee190-kube-api-access-4b4rz\") pod \"certified-operators-dfl6w\" (UID: \"104c0c5d-caaa-4b8a-84eb-3db527cee190\") " pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.452114 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/104c0c5d-caaa-4b8a-84eb-3db527cee190-catalog-content\") pod \"certified-operators-dfl6w\" (UID: \"104c0c5d-caaa-4b8a-84eb-3db527cee190\") " pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.452159 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/104c0c5d-caaa-4b8a-84eb-3db527cee190-utilities\") pod \"certified-operators-dfl6w\" (UID: \"104c0c5d-caaa-4b8a-84eb-3db527cee190\") " pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.553120 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b4rz\" (UniqueName: \"kubernetes.io/projected/104c0c5d-caaa-4b8a-84eb-3db527cee190-kube-api-access-4b4rz\") pod \"certified-operators-dfl6w\" (UID: \"104c0c5d-caaa-4b8a-84eb-3db527cee190\") " pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.553191 4660 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/104c0c5d-caaa-4b8a-84eb-3db527cee190-catalog-content\") pod \"certified-operators-dfl6w\" (UID: \"104c0c5d-caaa-4b8a-84eb-3db527cee190\") " pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.553249 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/104c0c5d-caaa-4b8a-84eb-3db527cee190-utilities\") pod \"certified-operators-dfl6w\" (UID: \"104c0c5d-caaa-4b8a-84eb-3db527cee190\") " pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.554329 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/104c0c5d-caaa-4b8a-84eb-3db527cee190-catalog-content\") pod \"certified-operators-dfl6w\" (UID: \"104c0c5d-caaa-4b8a-84eb-3db527cee190\") " pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.554424 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/104c0c5d-caaa-4b8a-84eb-3db527cee190-utilities\") pod \"certified-operators-dfl6w\" (UID: \"104c0c5d-caaa-4b8a-84eb-3db527cee190\") " pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.574854 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b4rz\" (UniqueName: \"kubernetes.io/projected/104c0c5d-caaa-4b8a-84eb-3db527cee190-kube-api-access-4b4rz\") pod \"certified-operators-dfl6w\" (UID: \"104c0c5d-caaa-4b8a-84eb-3db527cee190\") " pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.681516 4660 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:57:54 crc kubenswrapper[4660]: I0129 12:57:54.946554 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dfl6w"] Jan 29 12:57:55 crc kubenswrapper[4660]: I0129 12:57:55.025276 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfl6w" event={"ID":"104c0c5d-caaa-4b8a-84eb-3db527cee190","Type":"ContainerStarted","Data":"f29f695a6648bba0baebf8bc1569a8ac6e719b680f4d1622a880d9a75c28dc1e"} Jan 29 12:57:56 crc kubenswrapper[4660]: I0129 12:57:56.039538 4660 generic.go:334] "Generic (PLEG): container finished" podID="104c0c5d-caaa-4b8a-84eb-3db527cee190" containerID="13eb247e78897844ee60599129bd0dffbcc1dad1e1431cb8cd416a52f6804e8f" exitCode=0 Jan 29 12:57:56 crc kubenswrapper[4660]: I0129 12:57:56.039822 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfl6w" event={"ID":"104c0c5d-caaa-4b8a-84eb-3db527cee190","Type":"ContainerDied","Data":"13eb247e78897844ee60599129bd0dffbcc1dad1e1431cb8cd416a52f6804e8f"} Jan 29 12:57:57 crc kubenswrapper[4660]: I0129 12:57:57.050722 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfl6w" event={"ID":"104c0c5d-caaa-4b8a-84eb-3db527cee190","Type":"ContainerStarted","Data":"78d07e94aed1be66e9e816d9de6abf68f8cb273811ddee4e2804715a49c299a0"} Jan 29 12:57:58 crc kubenswrapper[4660]: I0129 12:57:58.065436 4660 generic.go:334] "Generic (PLEG): container finished" podID="104c0c5d-caaa-4b8a-84eb-3db527cee190" containerID="78d07e94aed1be66e9e816d9de6abf68f8cb273811ddee4e2804715a49c299a0" exitCode=0 Jan 29 12:57:58 crc kubenswrapper[4660]: I0129 12:57:58.065500 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfl6w" 
event={"ID":"104c0c5d-caaa-4b8a-84eb-3db527cee190","Type":"ContainerDied","Data":"78d07e94aed1be66e9e816d9de6abf68f8cb273811ddee4e2804715a49c299a0"} Jan 29 12:57:59 crc kubenswrapper[4660]: I0129 12:57:59.076216 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfl6w" event={"ID":"104c0c5d-caaa-4b8a-84eb-3db527cee190","Type":"ContainerStarted","Data":"5b9424e2d2ebe8f66602312361fb6ddf21f0e9c0b7c3d256b4b69a17a4849bd2"} Jan 29 12:57:59 crc kubenswrapper[4660]: I0129 12:57:59.103151 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dfl6w" podStartSLOduration=2.682842319 podStartE2EDuration="5.103135924s" podCreationTimestamp="2026-01-29 12:57:54 +0000 UTC" firstStartedPulling="2026-01-29 12:57:56.043047638 +0000 UTC m=+3113.265989770" lastFinishedPulling="2026-01-29 12:57:58.463341243 +0000 UTC m=+3115.686283375" observedRunningTime="2026-01-29 12:57:59.098645366 +0000 UTC m=+3116.321587518" watchObservedRunningTime="2026-01-29 12:57:59.103135924 +0000 UTC m=+3116.326078056" Jan 29 12:58:02 crc kubenswrapper[4660]: I0129 12:58:02.470679 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:58:02 crc kubenswrapper[4660]: E0129 12:58:02.471253 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:58:04 crc kubenswrapper[4660]: I0129 12:58:04.681947 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:58:04 crc 
kubenswrapper[4660]: I0129 12:58:04.682018 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:58:04 crc kubenswrapper[4660]: I0129 12:58:04.740631 4660 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:58:05 crc kubenswrapper[4660]: I0129 12:58:05.179327 4660 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:58:05 crc kubenswrapper[4660]: I0129 12:58:05.240541 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dfl6w"] Jan 29 12:58:07 crc kubenswrapper[4660]: I0129 12:58:07.145236 4660 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dfl6w" podUID="104c0c5d-caaa-4b8a-84eb-3db527cee190" containerName="registry-server" containerID="cri-o://5b9424e2d2ebe8f66602312361fb6ddf21f0e9c0b7c3d256b4b69a17a4849bd2" gracePeriod=2 Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.152519 4660 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.154929 4660 generic.go:334] "Generic (PLEG): container finished" podID="104c0c5d-caaa-4b8a-84eb-3db527cee190" containerID="5b9424e2d2ebe8f66602312361fb6ddf21f0e9c0b7c3d256b4b69a17a4849bd2" exitCode=0 Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.154975 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfl6w" event={"ID":"104c0c5d-caaa-4b8a-84eb-3db527cee190","Type":"ContainerDied","Data":"5b9424e2d2ebe8f66602312361fb6ddf21f0e9c0b7c3d256b4b69a17a4849bd2"} Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.155003 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dfl6w" event={"ID":"104c0c5d-caaa-4b8a-84eb-3db527cee190","Type":"ContainerDied","Data":"f29f695a6648bba0baebf8bc1569a8ac6e719b680f4d1622a880d9a75c28dc1e"} Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.155022 4660 scope.go:117] "RemoveContainer" containerID="5b9424e2d2ebe8f66602312361fb6ddf21f0e9c0b7c3d256b4b69a17a4849bd2" Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.177058 4660 scope.go:117] "RemoveContainer" containerID="78d07e94aed1be66e9e816d9de6abf68f8cb273811ddee4e2804715a49c299a0" Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.194911 4660 scope.go:117] "RemoveContainer" containerID="13eb247e78897844ee60599129bd0dffbcc1dad1e1431cb8cd416a52f6804e8f" Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.216610 4660 scope.go:117] "RemoveContainer" containerID="5b9424e2d2ebe8f66602312361fb6ddf21f0e9c0b7c3d256b4b69a17a4849bd2" Jan 29 12:58:08 crc kubenswrapper[4660]: E0129 12:58:08.217142 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b9424e2d2ebe8f66602312361fb6ddf21f0e9c0b7c3d256b4b69a17a4849bd2\": container with ID starting with 
5b9424e2d2ebe8f66602312361fb6ddf21f0e9c0b7c3d256b4b69a17a4849bd2 not found: ID does not exist" containerID="5b9424e2d2ebe8f66602312361fb6ddf21f0e9c0b7c3d256b4b69a17a4849bd2" Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.217184 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b9424e2d2ebe8f66602312361fb6ddf21f0e9c0b7c3d256b4b69a17a4849bd2"} err="failed to get container status \"5b9424e2d2ebe8f66602312361fb6ddf21f0e9c0b7c3d256b4b69a17a4849bd2\": rpc error: code = NotFound desc = could not find container \"5b9424e2d2ebe8f66602312361fb6ddf21f0e9c0b7c3d256b4b69a17a4849bd2\": container with ID starting with 5b9424e2d2ebe8f66602312361fb6ddf21f0e9c0b7c3d256b4b69a17a4849bd2 not found: ID does not exist" Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.217211 4660 scope.go:117] "RemoveContainer" containerID="78d07e94aed1be66e9e816d9de6abf68f8cb273811ddee4e2804715a49c299a0" Jan 29 12:58:08 crc kubenswrapper[4660]: E0129 12:58:08.217574 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78d07e94aed1be66e9e816d9de6abf68f8cb273811ddee4e2804715a49c299a0\": container with ID starting with 78d07e94aed1be66e9e816d9de6abf68f8cb273811ddee4e2804715a49c299a0 not found: ID does not exist" containerID="78d07e94aed1be66e9e816d9de6abf68f8cb273811ddee4e2804715a49c299a0" Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.217625 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d07e94aed1be66e9e816d9de6abf68f8cb273811ddee4e2804715a49c299a0"} err="failed to get container status \"78d07e94aed1be66e9e816d9de6abf68f8cb273811ddee4e2804715a49c299a0\": rpc error: code = NotFound desc = could not find container \"78d07e94aed1be66e9e816d9de6abf68f8cb273811ddee4e2804715a49c299a0\": container with ID starting with 78d07e94aed1be66e9e816d9de6abf68f8cb273811ddee4e2804715a49c299a0 not found: ID does not 
exist" Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.217659 4660 scope.go:117] "RemoveContainer" containerID="13eb247e78897844ee60599129bd0dffbcc1dad1e1431cb8cd416a52f6804e8f" Jan 29 12:58:08 crc kubenswrapper[4660]: E0129 12:58:08.217949 4660 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13eb247e78897844ee60599129bd0dffbcc1dad1e1431cb8cd416a52f6804e8f\": container with ID starting with 13eb247e78897844ee60599129bd0dffbcc1dad1e1431cb8cd416a52f6804e8f not found: ID does not exist" containerID="13eb247e78897844ee60599129bd0dffbcc1dad1e1431cb8cd416a52f6804e8f" Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.217971 4660 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13eb247e78897844ee60599129bd0dffbcc1dad1e1431cb8cd416a52f6804e8f"} err="failed to get container status \"13eb247e78897844ee60599129bd0dffbcc1dad1e1431cb8cd416a52f6804e8f\": rpc error: code = NotFound desc = could not find container \"13eb247e78897844ee60599129bd0dffbcc1dad1e1431cb8cd416a52f6804e8f\": container with ID starting with 13eb247e78897844ee60599129bd0dffbcc1dad1e1431cb8cd416a52f6804e8f not found: ID does not exist" Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.251728 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b4rz\" (UniqueName: \"kubernetes.io/projected/104c0c5d-caaa-4b8a-84eb-3db527cee190-kube-api-access-4b4rz\") pod \"104c0c5d-caaa-4b8a-84eb-3db527cee190\" (UID: \"104c0c5d-caaa-4b8a-84eb-3db527cee190\") " Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.251841 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/104c0c5d-caaa-4b8a-84eb-3db527cee190-utilities\") pod \"104c0c5d-caaa-4b8a-84eb-3db527cee190\" (UID: \"104c0c5d-caaa-4b8a-84eb-3db527cee190\") " Jan 29 12:58:08 crc kubenswrapper[4660]: 
I0129 12:58:08.251930 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/104c0c5d-caaa-4b8a-84eb-3db527cee190-catalog-content\") pod \"104c0c5d-caaa-4b8a-84eb-3db527cee190\" (UID: \"104c0c5d-caaa-4b8a-84eb-3db527cee190\") " Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.253167 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/104c0c5d-caaa-4b8a-84eb-3db527cee190-utilities" (OuterVolumeSpecName: "utilities") pod "104c0c5d-caaa-4b8a-84eb-3db527cee190" (UID: "104c0c5d-caaa-4b8a-84eb-3db527cee190"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.257348 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/104c0c5d-caaa-4b8a-84eb-3db527cee190-kube-api-access-4b4rz" (OuterVolumeSpecName: "kube-api-access-4b4rz") pod "104c0c5d-caaa-4b8a-84eb-3db527cee190" (UID: "104c0c5d-caaa-4b8a-84eb-3db527cee190"). InnerVolumeSpecName "kube-api-access-4b4rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.299749 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/104c0c5d-caaa-4b8a-84eb-3db527cee190-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "104c0c5d-caaa-4b8a-84eb-3db527cee190" (UID: "104c0c5d-caaa-4b8a-84eb-3db527cee190"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.353903 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b4rz\" (UniqueName: \"kubernetes.io/projected/104c0c5d-caaa-4b8a-84eb-3db527cee190-kube-api-access-4b4rz\") on node \"crc\" DevicePath \"\"" Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.353984 4660 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/104c0c5d-caaa-4b8a-84eb-3db527cee190-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:58:08 crc kubenswrapper[4660]: I0129 12:58:08.354014 4660 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/104c0c5d-caaa-4b8a-84eb-3db527cee190-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:58:09 crc kubenswrapper[4660]: I0129 12:58:09.166307 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dfl6w" Jan 29 12:58:09 crc kubenswrapper[4660]: I0129 12:58:09.218035 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dfl6w"] Jan 29 12:58:09 crc kubenswrapper[4660]: I0129 12:58:09.225341 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dfl6w"] Jan 29 12:58:09 crc kubenswrapper[4660]: I0129 12:58:09.479288 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="104c0c5d-caaa-4b8a-84eb-3db527cee190" path="/var/lib/kubelet/pods/104c0c5d-caaa-4b8a-84eb-3db527cee190/volumes" Jan 29 12:58:14 crc kubenswrapper[4660]: I0129 12:58:14.470003 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:58:14 crc kubenswrapper[4660]: E0129 12:58:14.470474 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:58:28 crc kubenswrapper[4660]: I0129 12:58:28.470117 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:58:28 crc kubenswrapper[4660]: E0129 12:58:28.470863 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:58:43 crc kubenswrapper[4660]: I0129 12:58:43.474639 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:58:43 crc kubenswrapper[4660]: E0129 12:58:43.477049 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:58:58 crc kubenswrapper[4660]: I0129 12:58:58.469562 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:58:58 crc kubenswrapper[4660]: E0129 12:58:58.470360 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:59:12 crc kubenswrapper[4660]: I0129 12:59:12.470218 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:59:12 crc kubenswrapper[4660]: E0129 12:59:12.471259 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:59:24 crc kubenswrapper[4660]: I0129 12:59:24.470117 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:59:24 crc kubenswrapper[4660]: E0129 12:59:24.471268 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:59:37 crc kubenswrapper[4660]: I0129 12:59:37.472122 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:59:37 crc kubenswrapper[4660]: E0129 12:59:37.472923 4660 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 12:59:49 crc kubenswrapper[4660]: I0129 12:59:49.470614 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 12:59:49 crc kubenswrapper[4660]: E0129 12:59:49.471411 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.156112 4660 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz"] Jan 29 13:00:00 crc kubenswrapper[4660]: E0129 13:00:00.157008 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104c0c5d-caaa-4b8a-84eb-3db527cee190" containerName="extract-content" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.157025 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="104c0c5d-caaa-4b8a-84eb-3db527cee190" containerName="extract-content" Jan 29 13:00:00 crc kubenswrapper[4660]: E0129 13:00:00.157043 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104c0c5d-caaa-4b8a-84eb-3db527cee190" containerName="extract-utilities" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.157051 4660 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="104c0c5d-caaa-4b8a-84eb-3db527cee190" containerName="extract-utilities" Jan 29 13:00:00 crc kubenswrapper[4660]: E0129 13:00:00.157063 4660 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104c0c5d-caaa-4b8a-84eb-3db527cee190" containerName="registry-server" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.157071 4660 state_mem.go:107] "Deleted CPUSet assignment" podUID="104c0c5d-caaa-4b8a-84eb-3db527cee190" containerName="registry-server" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.157273 4660 memory_manager.go:354] "RemoveStaleState removing state" podUID="104c0c5d-caaa-4b8a-84eb-3db527cee190" containerName="registry-server" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.157846 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.160650 4660 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.160660 4660 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.178912 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz"] Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.211461 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdnfx\" (UniqueName: \"kubernetes.io/projected/95c559be-b619-4bbb-aa60-05e328fe1b13-kube-api-access-vdnfx\") pod \"collect-profiles-29494860-qvrnz\" (UID: \"95c559be-b619-4bbb-aa60-05e328fe1b13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 
13:00:00.211839 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c559be-b619-4bbb-aa60-05e328fe1b13-config-volume\") pod \"collect-profiles-29494860-qvrnz\" (UID: \"95c559be-b619-4bbb-aa60-05e328fe1b13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.211983 4660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95c559be-b619-4bbb-aa60-05e328fe1b13-secret-volume\") pod \"collect-profiles-29494860-qvrnz\" (UID: \"95c559be-b619-4bbb-aa60-05e328fe1b13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.313676 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdnfx\" (UniqueName: \"kubernetes.io/projected/95c559be-b619-4bbb-aa60-05e328fe1b13-kube-api-access-vdnfx\") pod \"collect-profiles-29494860-qvrnz\" (UID: \"95c559be-b619-4bbb-aa60-05e328fe1b13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.313740 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c559be-b619-4bbb-aa60-05e328fe1b13-config-volume\") pod \"collect-profiles-29494860-qvrnz\" (UID: \"95c559be-b619-4bbb-aa60-05e328fe1b13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.313794 4660 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95c559be-b619-4bbb-aa60-05e328fe1b13-secret-volume\") pod \"collect-profiles-29494860-qvrnz\" (UID: 
\"95c559be-b619-4bbb-aa60-05e328fe1b13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.315096 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c559be-b619-4bbb-aa60-05e328fe1b13-config-volume\") pod \"collect-profiles-29494860-qvrnz\" (UID: \"95c559be-b619-4bbb-aa60-05e328fe1b13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.321287 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95c559be-b619-4bbb-aa60-05e328fe1b13-secret-volume\") pod \"collect-profiles-29494860-qvrnz\" (UID: \"95c559be-b619-4bbb-aa60-05e328fe1b13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.329841 4660 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdnfx\" (UniqueName: \"kubernetes.io/projected/95c559be-b619-4bbb-aa60-05e328fe1b13-kube-api-access-vdnfx\") pod \"collect-profiles-29494860-qvrnz\" (UID: \"95c559be-b619-4bbb-aa60-05e328fe1b13\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.469603 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 13:00:00 crc kubenswrapper[4660]: E0129 13:00:00.469880 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" 
podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.485103 4660 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" Jan 29 13:00:00 crc kubenswrapper[4660]: I0129 13:00:00.918969 4660 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz"] Jan 29 13:00:01 crc kubenswrapper[4660]: I0129 13:00:01.073524 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" event={"ID":"95c559be-b619-4bbb-aa60-05e328fe1b13","Type":"ContainerStarted","Data":"3a18fbe4c31e6b7d4b30f78ff2443481e9c7cae3a92cc349cac3a33ee0db03fa"} Jan 29 13:00:01 crc kubenswrapper[4660]: I0129 13:00:01.074196 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" event={"ID":"95c559be-b619-4bbb-aa60-05e328fe1b13","Type":"ContainerStarted","Data":"dcbdf7517368939a9d0a344a73151005d0801a8f0f5924f477e141896d094b6a"} Jan 29 13:00:01 crc kubenswrapper[4660]: I0129 13:00:01.090737 4660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" podStartSLOduration=1.090715745 podStartE2EDuration="1.090715745s" podCreationTimestamp="2026-01-29 13:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 13:00:01.086583627 +0000 UTC m=+3238.309525769" watchObservedRunningTime="2026-01-29 13:00:01.090715745 +0000 UTC m=+3238.313657887" Jan 29 13:00:02 crc kubenswrapper[4660]: I0129 13:00:02.083746 4660 generic.go:334] "Generic (PLEG): container finished" podID="95c559be-b619-4bbb-aa60-05e328fe1b13" containerID="3a18fbe4c31e6b7d4b30f78ff2443481e9c7cae3a92cc349cac3a33ee0db03fa" exitCode=0 
Jan 29 13:00:02 crc kubenswrapper[4660]: I0129 13:00:02.083799 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" event={"ID":"95c559be-b619-4bbb-aa60-05e328fe1b13","Type":"ContainerDied","Data":"3a18fbe4c31e6b7d4b30f78ff2443481e9c7cae3a92cc349cac3a33ee0db03fa"} Jan 29 13:00:03 crc kubenswrapper[4660]: I0129 13:00:03.424771 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" Jan 29 13:00:03 crc kubenswrapper[4660]: I0129 13:00:03.467418 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdnfx\" (UniqueName: \"kubernetes.io/projected/95c559be-b619-4bbb-aa60-05e328fe1b13-kube-api-access-vdnfx\") pod \"95c559be-b619-4bbb-aa60-05e328fe1b13\" (UID: \"95c559be-b619-4bbb-aa60-05e328fe1b13\") " Jan 29 13:00:03 crc kubenswrapper[4660]: I0129 13:00:03.467520 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95c559be-b619-4bbb-aa60-05e328fe1b13-secret-volume\") pod \"95c559be-b619-4bbb-aa60-05e328fe1b13\" (UID: \"95c559be-b619-4bbb-aa60-05e328fe1b13\") " Jan 29 13:00:03 crc kubenswrapper[4660]: I0129 13:00:03.467569 4660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c559be-b619-4bbb-aa60-05e328fe1b13-config-volume\") pod \"95c559be-b619-4bbb-aa60-05e328fe1b13\" (UID: \"95c559be-b619-4bbb-aa60-05e328fe1b13\") " Jan 29 13:00:03 crc kubenswrapper[4660]: I0129 13:00:03.468667 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95c559be-b619-4bbb-aa60-05e328fe1b13-config-volume" (OuterVolumeSpecName: "config-volume") pod "95c559be-b619-4bbb-aa60-05e328fe1b13" (UID: "95c559be-b619-4bbb-aa60-05e328fe1b13"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 13:00:03 crc kubenswrapper[4660]: I0129 13:00:03.490033 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c559be-b619-4bbb-aa60-05e328fe1b13-kube-api-access-vdnfx" (OuterVolumeSpecName: "kube-api-access-vdnfx") pod "95c559be-b619-4bbb-aa60-05e328fe1b13" (UID: "95c559be-b619-4bbb-aa60-05e328fe1b13"). InnerVolumeSpecName "kube-api-access-vdnfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 13:00:03 crc kubenswrapper[4660]: I0129 13:00:03.492914 4660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c559be-b619-4bbb-aa60-05e328fe1b13-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "95c559be-b619-4bbb-aa60-05e328fe1b13" (UID: "95c559be-b619-4bbb-aa60-05e328fe1b13"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 13:00:03 crc kubenswrapper[4660]: I0129 13:00:03.570770 4660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdnfx\" (UniqueName: \"kubernetes.io/projected/95c559be-b619-4bbb-aa60-05e328fe1b13-kube-api-access-vdnfx\") on node \"crc\" DevicePath \"\"" Jan 29 13:00:03 crc kubenswrapper[4660]: I0129 13:00:03.570820 4660 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95c559be-b619-4bbb-aa60-05e328fe1b13-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 13:00:03 crc kubenswrapper[4660]: I0129 13:00:03.570847 4660 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c559be-b619-4bbb-aa60-05e328fe1b13-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 13:00:04 crc kubenswrapper[4660]: I0129 13:00:04.106965 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" 
event={"ID":"95c559be-b619-4bbb-aa60-05e328fe1b13","Type":"ContainerDied","Data":"dcbdf7517368939a9d0a344a73151005d0801a8f0f5924f477e141896d094b6a"} Jan 29 13:00:04 crc kubenswrapper[4660]: I0129 13:00:04.107534 4660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcbdf7517368939a9d0a344a73151005d0801a8f0f5924f477e141896d094b6a" Jan 29 13:00:04 crc kubenswrapper[4660]: I0129 13:00:04.107184 4660 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-qvrnz" Jan 29 13:00:04 crc kubenswrapper[4660]: I0129 13:00:04.500828 4660 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh"] Jan 29 13:00:04 crc kubenswrapper[4660]: I0129 13:00:04.508442 4660 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-2fpsh"] Jan 29 13:00:05 crc kubenswrapper[4660]: I0129 13:00:05.479638 4660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="189d7d7d-910a-41ae-bee0-0fa4ac0e90d4" path="/var/lib/kubelet/pods/189d7d7d-910a-41ae-bee0-0fa4ac0e90d4/volumes" Jan 29 13:00:15 crc kubenswrapper[4660]: I0129 13:00:15.470062 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 13:00:15 crc kubenswrapper[4660]: E0129 13:00:15.470836 4660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdfz2_openshift-machine-config-operator(1d28a7f3-5242-4198-9ea8-6e12d67b4fa8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" podUID="1d28a7f3-5242-4198-9ea8-6e12d67b4fa8" Jan 29 13:00:17 crc kubenswrapper[4660]: I0129 13:00:17.411799 4660 scope.go:117] "RemoveContainer" 
containerID="b188f9c7a9e2b7415c620c45006101ac0b112977b52b2569f1f291412d65e3f4" Jan 29 13:00:29 crc kubenswrapper[4660]: I0129 13:00:29.470436 4660 scope.go:117] "RemoveContainer" containerID="e6177ca677c666d3de93117d63d098b12a931707c85fac1ff82dd77cca7ea351" Jan 29 13:00:30 crc kubenswrapper[4660]: I0129 13:00:30.285163 4660 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdfz2" event={"ID":"1d28a7f3-5242-4198-9ea8-6e12d67b4fa8","Type":"ContainerStarted","Data":"e988ce0f1fa92f64683a5427212a58444864d524ee9836677743f8c5e6d234ea"}